Test Report: Hyperkit_macOS 18711

d0c8b6a0bda25d1a1bd2a775bc56b8f16412b6e8:2024-04-22:34150

Failed tests (9/332)

TestMultiControlPlane/serial/CopyFile (375.33s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status --output json -v=7 --alsologtostderr
E0422 04:03:44.367728    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 04:04:53.314463    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status --output json -v=7 --alsologtostderr: exit status 3 (5m0.218751905s)

-- stdout --
	[{"Name":"ha-069000","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Configured","Worker":false},{"Name":"ha-069000-m02","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Configured","Worker":false},{"Name":"ha-069000-m03","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Configured","Worker":false},{"Name":"ha-069000-m04","Host":"Error","Kubelet":"Nonexistent","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

-- /stdout --
** stderr ** 
	I0422 04:01:56.117651    3781 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:01:56.118228    3781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:01:56.118236    3781 out.go:304] Setting ErrFile to fd 2...
	I0422 04:01:56.118240    3781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:01:56.118440    3781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:01:56.118683    3781 out.go:298] Setting JSON to true
	I0422 04:01:56.118706    3781 mustload.go:65] Loading cluster: ha-069000
	I0422 04:01:56.118750    3781 notify.go:220] Checking for updates...
	I0422 04:01:56.119214    3781 config.go:182] Loaded profile config "ha-069000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:01:56.119261    3781 status.go:255] checking status of ha-069000 ...
	I0422 04:01:56.119712    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:01:56.119761    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:01:56.129448    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50971
	I0422 04:01:56.129818    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:01:56.130219    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:01:56.130258    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:01:56.130507    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:01:56.130645    3781 main.go:141] libmachine: (ha-069000) Calling .GetState
	I0422 04:01:56.130729    3781 main.go:141] libmachine: (ha-069000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:01:56.130809    3781 main.go:141] libmachine: (ha-069000) DBG | hyperkit pid from json: 3181
	I0422 04:01:56.131815    3781 status.go:330] ha-069000 host status = "Running" (err=<nil>)
	I0422 04:01:56.131833    3781 host.go:66] Checking if "ha-069000" exists ...
	I0422 04:01:56.132077    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:01:56.132102    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:01:56.141552    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50974
	I0422 04:01:56.141921    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:01:56.142302    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:01:56.142312    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:01:56.142583    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:01:56.142713    3781 main.go:141] libmachine: (ha-069000) Calling .GetIP
	I0422 04:01:56.142816    3781 host.go:66] Checking if "ha-069000" exists ...
	I0422 04:01:56.143115    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:01:56.143152    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:01:56.153340    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50976
	I0422 04:01:56.153716    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:01:56.162172    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:01:56.162195    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:01:56.162489    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:01:56.162626    3781 main.go:141] libmachine: (ha-069000) Calling .DriverName
	I0422 04:01:56.162791    3781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:01:56.162814    3781 main.go:141] libmachine: (ha-069000) Calling .GetSSHHostname
	I0422 04:01:56.162906    3781 main.go:141] libmachine: (ha-069000) Calling .GetSSHPort
	I0422 04:01:56.163010    3781 main.go:141] libmachine: (ha-069000) Calling .GetSSHKeyPath
	I0422 04:01:56.163108    3781 main.go:141] libmachine: (ha-069000) Calling .GetSSHUsername
	I0422 04:01:56.163221    3781 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000/id_rsa Username:docker}
	W0422 04:03:11.167062    3781 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.6:22: connect: operation timed out
	W0422 04:03:11.167131    3781 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0422 04:03:11.167141    3781 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	I0422 04:03:11.167151    3781 status.go:257] ha-069000 status: &{Name:ha-069000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 04:03:11.167162    3781 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	I0422 04:03:11.167169    3781 status.go:255] checking status of ha-069000-m02 ...
	I0422 04:03:11.167450    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:03:11.167474    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:03:11.176662    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50979
	I0422 04:03:11.176989    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:03:11.177347    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:03:11.177362    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:03:11.177553    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:03:11.177659    3781 main.go:141] libmachine: (ha-069000-m02) Calling .GetState
	I0422 04:03:11.177731    3781 main.go:141] libmachine: (ha-069000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:03:11.177815    3781 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid from json: 3228
	I0422 04:03:11.178801    3781 status.go:330] ha-069000-m02 host status = "Running" (err=<nil>)
	I0422 04:03:11.178809    3781 host.go:66] Checking if "ha-069000-m02" exists ...
	I0422 04:03:11.179055    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:03:11.179074    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:03:11.187658    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50981
	I0422 04:03:11.188016    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:03:11.188375    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:03:11.188407    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:03:11.188602    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:03:11.188715    3781 main.go:141] libmachine: (ha-069000-m02) Calling .GetIP
	I0422 04:03:11.188808    3781 host.go:66] Checking if "ha-069000-m02" exists ...
	I0422 04:03:11.189067    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:03:11.189090    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:03:11.197666    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50983
	I0422 04:03:11.198002    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:03:11.198365    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:03:11.198386    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:03:11.198604    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:03:11.198733    3781 main.go:141] libmachine: (ha-069000-m02) Calling .DriverName
	I0422 04:03:11.198870    3781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:03:11.198884    3781 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHHostname
	I0422 04:03:11.198972    3781 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHPort
	I0422 04:03:11.199053    3781 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHKeyPath
	I0422 04:03:11.199166    3781 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHUsername
	I0422 04:03:11.199260    3781 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/id_rsa Username:docker}
	W0422 04:04:26.200772    3781 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.7:22: connect: operation timed out
	W0422 04:04:26.200832    3781 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.7:22: connect: operation timed out
	E0422 04:04:26.200844    3781 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.7:22: connect: operation timed out
	I0422 04:04:26.200852    3781 status.go:257] ha-069000-m02 status: &{Name:ha-069000-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 04:04:26.200865    3781 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.7:22: connect: operation timed out
	I0422 04:04:26.200870    3781 status.go:255] checking status of ha-069000-m03 ...
	I0422 04:04:26.201148    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:04:26.201175    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:04:26.210809    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50986
	I0422 04:04:26.211170    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:04:26.211501    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:04:26.211515    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:04:26.211750    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:04:26.211860    3781 main.go:141] libmachine: (ha-069000-m03) Calling .GetState
	I0422 04:04:26.211950    3781 main.go:141] libmachine: (ha-069000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:04:26.212041    3781 main.go:141] libmachine: (ha-069000-m03) DBG | hyperkit pid from json: 3336
	I0422 04:04:26.213084    3781 status.go:330] ha-069000-m03 host status = "Running" (err=<nil>)
	I0422 04:04:26.213096    3781 host.go:66] Checking if "ha-069000-m03" exists ...
	I0422 04:04:26.213380    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:04:26.213429    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:04:26.222489    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50988
	I0422 04:04:26.222873    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:04:26.223249    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:04:26.223266    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:04:26.223497    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:04:26.223635    3781 main.go:141] libmachine: (ha-069000-m03) Calling .GetIP
	I0422 04:04:26.223727    3781 host.go:66] Checking if "ha-069000-m03" exists ...
	I0422 04:04:26.224002    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:04:26.224026    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:04:26.232931    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50990
	I0422 04:04:26.233307    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:04:26.233644    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:04:26.233669    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:04:26.233878    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:04:26.233991    3781 main.go:141] libmachine: (ha-069000-m03) Calling .DriverName
	I0422 04:04:26.234117    3781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:04:26.234129    3781 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHHostname
	I0422 04:04:26.234238    3781 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHPort
	I0422 04:04:26.234314    3781 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHKeyPath
	I0422 04:04:26.234412    3781 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHUsername
	I0422 04:04:26.234487    3781 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m03/id_rsa Username:docker}
	W0422 04:05:41.239086    3781 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.8:22: connect: operation timed out
	W0422 04:05:41.239166    3781 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.8:22: connect: operation timed out
	E0422 04:05:41.239186    3781 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.8:22: connect: operation timed out
	I0422 04:05:41.239197    3781 status.go:257] ha-069000-m03 status: &{Name:ha-069000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 04:05:41.239213    3781 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.8:22: connect: operation timed out
	I0422 04:05:41.239220    3781 status.go:255] checking status of ha-069000-m04 ...
	I0422 04:05:41.239625    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:05:41.239692    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:05:41.249938    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50993
	I0422 04:05:41.250267    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:05:41.250614    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:05:41.250627    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:05:41.250838    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:05:41.250951    3781 main.go:141] libmachine: (ha-069000-m04) Calling .GetState
	I0422 04:05:41.251034    3781 main.go:141] libmachine: (ha-069000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:05:41.251134    3781 main.go:141] libmachine: (ha-069000-m04) DBG | hyperkit pid from json: 3553
	I0422 04:05:41.252140    3781 status.go:330] ha-069000-m04 host status = "Running" (err=<nil>)
	I0422 04:05:41.252152    3781 host.go:66] Checking if "ha-069000-m04" exists ...
	I0422 04:05:41.252399    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:05:41.252421    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:05:41.261715    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50995
	I0422 04:05:41.262066    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:05:41.262414    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:05:41.262431    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:05:41.262638    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:05:41.262748    3781 main.go:141] libmachine: (ha-069000-m04) Calling .GetIP
	I0422 04:05:41.262831    3781 host.go:66] Checking if "ha-069000-m04" exists ...
	I0422 04:05:41.263093    3781 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:05:41.263115    3781 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:05:41.271728    3781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50997
	I0422 04:05:41.272043    3781 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:05:41.272442    3781 main.go:141] libmachine: Using API Version  1
	I0422 04:05:41.272467    3781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:05:41.272699    3781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:05:41.272829    3781 main.go:141] libmachine: (ha-069000-m04) Calling .DriverName
	I0422 04:05:41.273874    3781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:05:41.273885    3781 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHHostname
	I0422 04:05:41.273972    3781 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHPort
	I0422 04:05:41.274064    3781 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHKeyPath
	I0422 04:05:41.274175    3781 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHUsername
	I0422 04:05:41.274265    3781 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m04/id_rsa Username:docker}
	W0422 04:06:56.276169    3781 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.9:22: connect: operation timed out
	W0422 04:06:56.276236    3781 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	E0422 04:06:56.276254    3781 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	I0422 04:06:56.276266    3781 status.go:257] ha-069000-m04 status: &{Name:ha-069000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0422 04:06:56.276279    3781 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out

** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-069000 status --output json -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000: exit status 3 (1m15.106370457s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 04:08:11.386087    4140 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0422 04:08:11.386101    4140 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-069000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/CopyFile (375.33s)

TestMultiControlPlane/serial/StopSecondaryNode (383.50s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 node stop m02 -v=7 --alsologtostderr
E0422 04:08:44.375116    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-069000 node stop m02 -v=7 --alsologtostderr: (1m23.221968921s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
E0422 04:09:53.323509    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 04:10:07.422641    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: exit status 7 (3m45.179392383s)

-- stdout --
	ha-069000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-069000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-069000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-069000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0422 04:09:34.673247    4323 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:09:34.673461    4323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:09:34.673467    4323 out.go:304] Setting ErrFile to fd 2...
	I0422 04:09:34.673471    4323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:09:34.673675    4323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:09:34.673858    4323 out.go:298] Setting JSON to false
	I0422 04:09:34.673881    4323 mustload.go:65] Loading cluster: ha-069000
	I0422 04:09:34.673917    4323 notify.go:220] Checking for updates...
	I0422 04:09:34.674231    4323 config.go:182] Loaded profile config "ha-069000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:09:34.674247    4323 status.go:255] checking status of ha-069000 ...
	I0422 04:09:34.674637    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:09:34.674690    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:09:34.683388    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51012
	I0422 04:09:34.683693    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:09:34.684097    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:09:34.684108    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:09:34.684357    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:09:34.684487    4323 main.go:141] libmachine: (ha-069000) Calling .GetState
	I0422 04:09:34.684585    4323 main.go:141] libmachine: (ha-069000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:09:34.684662    4323 main.go:141] libmachine: (ha-069000) DBG | hyperkit pid from json: 3181
	I0422 04:09:34.685654    4323 status.go:330] ha-069000 host status = "Running" (err=<nil>)
	I0422 04:09:34.685673    4323 host.go:66] Checking if "ha-069000" exists ...
	I0422 04:09:34.685915    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:09:34.685939    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:09:34.694283    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51014
	I0422 04:09:34.694680    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:09:34.695128    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:09:34.695169    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:09:34.695409    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:09:34.695539    4323 main.go:141] libmachine: (ha-069000) Calling .GetIP
	I0422 04:09:34.701801    4323 host.go:66] Checking if "ha-069000" exists ...
	I0422 04:09:34.702074    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:09:34.702098    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:09:34.710469    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51016
	I0422 04:09:34.710773    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:09:34.711091    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:09:34.711102    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:09:34.711337    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:09:34.711506    4323 main.go:141] libmachine: (ha-069000) Calling .DriverName
	I0422 04:09:34.711718    4323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:09:34.711735    4323 main.go:141] libmachine: (ha-069000) Calling .GetSSHHostname
	I0422 04:09:34.711859    4323 main.go:141] libmachine: (ha-069000) Calling .GetSSHPort
	I0422 04:09:34.711942    4323 main.go:141] libmachine: (ha-069000) Calling .GetSSHKeyPath
	I0422 04:09:34.712048    4323 main.go:141] libmachine: (ha-069000) Calling .GetSSHUsername
	I0422 04:09:34.712126    4323 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000/id_rsa Username:docker}
	W0422 04:10:49.714990    4323 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.6:22: connect: operation timed out
	W0422 04:10:49.715070    4323 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0422 04:10:49.715085    4323 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	I0422 04:10:49.715097    4323 status.go:257] ha-069000 status: &{Name:ha-069000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 04:10:49.715112    4323 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	I0422 04:10:49.715124    4323 status.go:255] checking status of ha-069000-m02 ...
	I0422 04:10:49.715501    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:10:49.715533    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:10:49.724147    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51019
	I0422 04:10:49.724502    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:10:49.724826    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:10:49.724836    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:10:49.725065    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:10:49.725189    4323 main.go:141] libmachine: (ha-069000-m02) Calling .GetState
	I0422 04:10:49.725275    4323 main.go:141] libmachine: (ha-069000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:10:49.725363    4323 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid from json: 3228
	I0422 04:10:49.726325    4323 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid 3228 missing from process table
	I0422 04:10:49.726362    4323 status.go:330] ha-069000-m02 host status = "Stopped" (err=<nil>)
	I0422 04:10:49.726370    4323 status.go:343] host is not running, skipping remaining checks
	I0422 04:10:49.726377    4323 status.go:257] ha-069000-m02 status: &{Name:ha-069000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 04:10:49.726393    4323 status.go:255] checking status of ha-069000-m03 ...
	I0422 04:10:49.726642    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:10:49.726661    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:10:49.735228    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51021
	I0422 04:10:49.735576    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:10:49.735954    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:10:49.735971    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:10:49.736164    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:10:49.736295    4323 main.go:141] libmachine: (ha-069000-m03) Calling .GetState
	I0422 04:10:49.736376    4323 main.go:141] libmachine: (ha-069000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:10:49.736461    4323 main.go:141] libmachine: (ha-069000-m03) DBG | hyperkit pid from json: 3336
	I0422 04:10:49.737427    4323 status.go:330] ha-069000-m03 host status = "Running" (err=<nil>)
	I0422 04:10:49.737435    4323 host.go:66] Checking if "ha-069000-m03" exists ...
	I0422 04:10:49.737681    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:10:49.737701    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:10:49.746353    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51023
	I0422 04:10:49.746706    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:10:49.747041    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:10:49.747059    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:10:49.747289    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:10:49.747403    4323 main.go:141] libmachine: (ha-069000-m03) Calling .GetIP
	I0422 04:10:49.747497    4323 host.go:66] Checking if "ha-069000-m03" exists ...
	I0422 04:10:49.747755    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:10:49.747790    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:10:49.756361    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51025
	I0422 04:10:49.756691    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:10:49.757013    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:10:49.757024    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:10:49.757251    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:10:49.757364    4323 main.go:141] libmachine: (ha-069000-m03) Calling .DriverName
	I0422 04:10:49.757482    4323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:10:49.757493    4323 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHHostname
	I0422 04:10:49.757565    4323 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHPort
	I0422 04:10:49.757663    4323 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHKeyPath
	I0422 04:10:49.757756    4323 main.go:141] libmachine: (ha-069000-m03) Calling .GetSSHUsername
	I0422 04:10:49.757838    4323 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m03/id_rsa Username:docker}
	W0422 04:12:04.760768    4323 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.8:22: connect: operation timed out
	W0422 04:12:04.760880    4323 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.8:22: connect: operation timed out
	E0422 04:12:04.760895    4323 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.8:22: connect: operation timed out
	I0422 04:12:04.760905    4323 status.go:257] ha-069000-m03 status: &{Name:ha-069000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 04:12:04.760921    4323 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.8:22: connect: operation timed out
	I0422 04:12:04.760928    4323 status.go:255] checking status of ha-069000-m04 ...
	I0422 04:12:04.761288    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:12:04.761322    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:12:04.770436    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0422 04:12:04.770760    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:12:04.771069    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:12:04.771078    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:12:04.771313    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:12:04.771438    4323 main.go:141] libmachine: (ha-069000-m04) Calling .GetState
	I0422 04:12:04.771536    4323 main.go:141] libmachine: (ha-069000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:12:04.771618    4323 main.go:141] libmachine: (ha-069000-m04) DBG | hyperkit pid from json: 3553
	I0422 04:12:04.772602    4323 status.go:330] ha-069000-m04 host status = "Running" (err=<nil>)
	I0422 04:12:04.772611    4323 host.go:66] Checking if "ha-069000-m04" exists ...
	I0422 04:12:04.772865    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:12:04.772909    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:12:04.781567    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0422 04:12:04.781885    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:12:04.782186    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:12:04.782203    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:12:04.782428    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:12:04.782541    4323 main.go:141] libmachine: (ha-069000-m04) Calling .GetIP
	I0422 04:12:04.782627    4323 host.go:66] Checking if "ha-069000-m04" exists ...
	I0422 04:12:04.782919    4323 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:12:04.782948    4323 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:12:04.791450    4323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0422 04:12:04.791771    4323 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:12:04.792121    4323 main.go:141] libmachine: Using API Version  1
	I0422 04:12:04.792137    4323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:12:04.792363    4323 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:12:04.792473    4323 main.go:141] libmachine: (ha-069000-m04) Calling .DriverName
	I0422 04:12:04.792597    4323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:12:04.792609    4323 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHHostname
	I0422 04:12:04.792680    4323 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHPort
	I0422 04:12:04.792793    4323 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHKeyPath
	I0422 04:12:04.792876    4323 main.go:141] libmachine: (ha-069000-m04) Calling .GetSSHUsername
	I0422 04:12:04.792951    4323 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m04/id_rsa Username:docker}
	W0422 04:13:19.796385    4323 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.9:22: connect: operation timed out
	W0422 04:13:19.796452    4323 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	E0422 04:13:19.796468    4323 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	I0422 04:13:19.796481    4323 status.go:257] ha-069000-m04 status: &{Name:ha-069000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0422 04:13:19.796497    4323 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr": ha-069000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-069000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-069000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-069000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr": ha-069000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-069000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-069000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-069000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr": ha-069000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-069000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-069000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-069000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000
E0422 04:13:44.384152    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000: exit status 3 (1m15.0970478s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 04:14:34.896124    4562 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0422 04:14:34.896149    4562 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-069000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (383.50s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (227.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0422 04:14:53.334013    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2m32.484847709s)
ha_test.go:413: expected profile "ha-069000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-069000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-069000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-069000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.8\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.9\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000
E0422 04:17:56.316465    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000: exit status 3 (1m15.097985858s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 04:18:22.402358    4792 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0422 04:18:22.402386    4792 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-069000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (227.58s)

TestMultiControlPlane/serial/RestartSecondaryNode (221.76s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 node start m02 -v=7 --alsologtostderr
E0422 04:18:44.309850    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 04:19:53.255101    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 node start m02 -v=7 --alsologtostderr: signal: killed (1m36.165556893s)

-- stdout --
	* Starting "ha-069000-m02" control-plane node in "ha-069000" cluster
	* Restarting existing hyperkit VM for "ha-069000-m02" ...

-- /stdout --
** stderr ** 
	I0422 04:18:22.466434    4864 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:18:22.466757    4864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:18:22.466763    4864 out.go:304] Setting ErrFile to fd 2...
	I0422 04:18:22.466767    4864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:18:22.466956    4864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:18:22.467315    4864 mustload.go:65] Loading cluster: ha-069000
	I0422 04:18:22.467637    4864 config.go:182] Loaded profile config "ha-069000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:18:22.468061    4864 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:18:22.468092    4864 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:18:22.476516    4864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51065
	I0422 04:18:22.476972    4864 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:18:22.477448    4864 main.go:141] libmachine: Using API Version  1
	I0422 04:18:22.477457    4864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:18:22.477687    4864 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:18:22.477804    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetState
	I0422 04:18:22.477888    4864 main.go:141] libmachine: (ha-069000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:18:22.477981    4864 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid from json: 3228
	I0422 04:18:22.478946    4864 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid 3228 missing from process table
	W0422 04:18:22.479018    4864 host.go:58] "ha-069000-m02" host status: Stopped
	I0422 04:18:22.532400    4864 out.go:177] * Starting "ha-069000-m02" control-plane node in "ha-069000" cluster
	I0422 04:18:22.553411    4864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 04:18:22.553469    4864 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0422 04:18:22.553497    4864 cache.go:56] Caching tarball of preloaded images
	I0422 04:18:22.553688    4864 preload.go:173] Found /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0422 04:18:22.553703    4864 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 04:18:22.553842    4864 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/ha-069000/config.json ...
	I0422 04:18:22.554464    4864 start.go:360] acquireMachinesLock for ha-069000-m02: {Name:mke81a6cfc4bf5ce8e1de7ad51be0d2fed5c5582 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 04:18:22.554560    4864 start.go:364] duration metric: took 65.649µs to acquireMachinesLock for "ha-069000-m02"
	I0422 04:18:22.554581    4864 start.go:96] Skipping create...Using existing machine configuration
	I0422 04:18:22.554593    4864 fix.go:54] fixHost starting: m02
	I0422 04:18:22.554998    4864 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:18:22.555024    4864 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:18:22.563573    4864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51067
	I0422 04:18:22.563898    4864 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:18:22.564245    4864 main.go:141] libmachine: Using API Version  1
	I0422 04:18:22.564267    4864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:18:22.564473    4864 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:18:22.564602    4864 main.go:141] libmachine: (ha-069000-m02) Calling .DriverName
	I0422 04:18:22.564730    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetState
	I0422 04:18:22.564832    4864 main.go:141] libmachine: (ha-069000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:18:22.564905    4864 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid from json: 3228
	I0422 04:18:22.565867    4864 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid 3228 missing from process table
	I0422 04:18:22.565910    4864 fix.go:112] recreateIfNeeded on ha-069000-m02: state=Stopped err=<nil>
	I0422 04:18:22.565930    4864 main.go:141] libmachine: (ha-069000-m02) Calling .DriverName
	W0422 04:18:22.566022    4864 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 04:18:22.589113    4864 out.go:177] * Restarting existing hyperkit VM for "ha-069000-m02" ...
	I0422 04:18:22.609343    4864 main.go:141] libmachine: (ha-069000-m02) Calling .Start
	I0422 04:18:22.609572    4864 main.go:141] libmachine: (ha-069000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:18:22.609653    4864 main.go:141] libmachine: (ha-069000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/hyperkit.pid
	I0422 04:18:22.611568    4864 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid 3228 missing from process table
	I0422 04:18:22.611579    4864 main.go:141] libmachine: (ha-069000-m02) DBG | pid 3228 is in state "Stopped"
	I0422 04:18:22.611596    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/hyperkit.pid...
	I0422 04:18:22.612251    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Using UUID 9381760d-797b-49c1-8862-eb8caf624dda
	I0422 04:18:22.639577    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Generated MAC c6:dd:3d:cf:f0:d2
	I0422 04:18:22.639604    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-069000
	I0422 04:18:22.639731    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9381760d-797b-49c1-8862-eb8caf624dda", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:18:22.639758    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9381760d-797b-49c1-8862-eb8caf624dda", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:18:22.639819    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9381760d-797b-49c1-8862-eb8caf624dda", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/ha-069000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-069000"}
	I0422 04:18:22.639866    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9381760d-797b-49c1-8862-eb8caf624dda -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/ha-069000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-069000"
	I0422 04:18:22.639877    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0422 04:18:22.641349    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 DEBUG: hyperkit: Pid is 4868
	I0422 04:18:22.641846    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Attempt 0
	I0422 04:18:22.641862    4864 main.go:141] libmachine: (ha-069000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:18:22.641931    4864 main.go:141] libmachine: (ha-069000-m02) DBG | hyperkit pid from json: 4868
	I0422 04:18:22.643799    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Searching for c6:dd:3d:cf:f0:d2 in /var/db/dhcpd_leases ...
	I0422 04:18:22.643885    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I0422 04:18:22.643906    4864 main.go:141] libmachine: (ha-069000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:26:70:e3:26:68:f0 ID:1,26:70:e3:26:68:f0 Lease:0x6627941f}
	I0422 04:18:22.643993    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetConfigRaw
	I0422 04:18:22.644026    4864 main.go:141] libmachine: (ha-069000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:96:fd:92:82:5b:dc ID:1,96:fd:92:82:5b:dc Lease:0x6627935e}
	I0422 04:18:22.644080    4864 main.go:141] libmachine: (ha-069000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:c6:dd:3d:cf:f0:d2 ID:1,c6:dd:3d:cf:f0:d2 Lease:0x66279293}
	I0422 04:18:22.644092    4864 main.go:141] libmachine: (ha-069000-m02) DBG | Found match: c6:dd:3d:cf:f0:d2
	I0422 04:18:22.644110    4864 main.go:141] libmachine: (ha-069000-m02) DBG | IP: 192.169.0.7
	I0422 04:18:22.645063    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetIP
	I0422 04:18:22.645313    4864 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/ha-069000/config.json ...
	I0422 04:18:22.645936    4864 machine.go:94] provisionDockerMachine start ...
	I0422 04:18:22.645947    4864 main.go:141] libmachine: (ha-069000-m02) Calling .DriverName
	I0422 04:18:22.646080    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHHostname
	I0422 04:18:22.646185    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHPort
	I0422 04:18:22.646291    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHKeyPath
	I0422 04:18:22.646409    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHKeyPath
	I0422 04:18:22.646500    4864 main.go:141] libmachine: (ha-069000-m02) Calling .GetSSHUsername
	I0422 04:18:22.646844    4864 main.go:141] libmachine: Using SSH client type: native
	I0422 04:18:22.647062    4864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb346b80] 0xb3498e0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0422 04:18:22.647070    4864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 04:18:22.650847    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0422 04:18:22.660541    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/ha-069000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0422 04:18:22.662417    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:18:22.662444    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:18:22.662457    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:18:22.662473    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:18:23.058897    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0422 04:18:23.058913    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0422 04:18:23.173651    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:18:23.173676    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:18:23.173685    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:18:23.173691    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:18:23.174541    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0422 04:18:23.174549    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0422 04:18:28.815169    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0422 04:18:28.815237    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0422 04:18:28.815247    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0422 04:18:28.838987    4864 main.go:141] libmachine: (ha-069000-m02) DBG | 2024/04/22 04:18:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0422 04:19:37.648158    4864 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.7:22: connect: operation timed out

** /stderr **
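The stderr trail above is internally consistent: hyperkit pid 3228 had vanished from the process table, the stale pid file was removed, the VM was relaunched as pid 4868, and its MAC c6:dd:3d:cf:f0:d2 resolved to 192.169.0.7 in /var/db/dhcpd_leases; provisioning then stalled until the SSH dial to that address timed out roughly 75 seconds later. As a minimal diagnostic sketch (an addition here, not part of the test suite), the failing dial can be reproduced in Go from the CI host; the IP, port, and timeout are taken from the log above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint the provisioner tried above; 75s roughly matches the
		// observed gap between the dial starting and "operation timed out".
		conn, err := net.DialTimeout("tcp", "192.169.0.7:22", 75*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("guest SSH port is reachable")
	}

If this times out while the hyperkit process is still alive, the guest either never finished booting or never brought up networking; the serial console log the VM was started with (.minikube/machines/ha-069000-m02/console-ring) would be the next place to look.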
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-amd64 -p ha-069000 node start m02 -v=7 --alsologtostderr": signal: killed
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (707ns)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (2.364µs)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (1.367µs)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (830ns)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (1.337µs)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (906ns)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (1.518µs)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (922ns)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr: context deadline exceeded (22.727µs)
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-069000 -n ha-069000: exit status 3 (1m15.097949512s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 04:22:04.163103    4991 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out
	E0422 04:22:04.163117    4991 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.6:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-069000" host is not running, skipping log retrieval (state="Error")
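Two details here are worth noting. First, each of the nine ha_test.go:428 status retries fails almost instantly (between 707ns and about 23µs) with "context deadline exceeded": the test's context had already expired, so each invocation returns before the minikube binary can even be spawned. A minimal sketch of that mechanism, assuming plain os/exec semantics rather than the actual test helper:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
		defer cancel()
		time.Sleep(time.Millisecond) // the deadline is now in the past

		// Run calls Start, which sees the already-expired context and returns
		// its error immediately, without executing the command at all.
		err := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "status").Run()
		fmt.Println(err) // context deadline exceeded
	}

Second, the post-mortem status check is the only call that actually ran, and it failed the same way the node start did: an SSH dial timeout, this time against the primary node at 192.169.0.6:22.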
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (221.76s)

TestMultiNode/serial/RestartMultiNode (144.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0422 04:38:44.316833    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 04:39:53.345543    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (2m21.349587334s)

-- stdout --
	* [multinode-449000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-449000" primary control-plane node in "multinode-449000" cluster
	* Restarting existing hyperkit VM for "multinode-449000" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-449000-m02" worker node in "multinode-449000" cluster
	* Restarting existing hyperkit VM for "multinode-449000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.16
	
	

-- /stdout --
** stderr ** 
	I0422 04:38:10.248163    6416 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:38:10.248364    6416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:38:10.248370    6416 out.go:304] Setting ErrFile to fd 2...
	I0422 04:38:10.248373    6416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:38:10.248551    6416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:38:10.249993    6416 out.go:298] Setting JSON to false
	I0422 04:38:10.272166    6416 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4060,"bootTime":1713781830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 04:38:10.272260    6416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 04:38:10.294339    6416 out.go:177] * [multinode-449000] minikube v1.33.0 on Darwin 14.4.1
	I0422 04:38:10.337130    6416 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 04:38:10.337190    6416 notify.go:220] Checking for updates...
	I0422 04:38:10.359049    6416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:10.379944    6416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 04:38:10.422063    6416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 04:38:10.442840    6416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	I0422 04:38:10.463898    6416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 04:38:10.485993    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:10.486650    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.486738    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.496755    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52202
	I0422 04:38:10.497088    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.497505    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.497514    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.497724    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.497841    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.498035    6416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 04:38:10.498265    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.498287    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.506538    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52204
	I0422 04:38:10.506852    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.507183    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.507192    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.507441    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.507612    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.536047    6416 out.go:177] * Using the hyperkit driver based on existing profile
	I0422 04:38:10.557148    6416 start.go:297] selected driver: hyperkit
	I0422 04:38:10.557176    6416 start.go:901] validating driver "hyperkit" against &{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:38:10.557446    6416 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 04:38:10.557634    6416 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 04:38:10.557843    6416 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18711-1033/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0422 04:38:10.567230    6416 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0422 04:38:10.571069    6416 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.571104    6416 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0422 04:38:10.573692    6416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 04:38:10.573749    6416 cni.go:84] Creating CNI manager for ""
	I0422 04:38:10.573757    6416 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0422 04:38:10.573831    6416 start.go:340] cluster config:
	{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:38:10.573922    6416 iso.go:125] acquiring lock: {Name:mk174d786084574fba345b763762a2b8adb514c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 04:38:10.616012    6416 out.go:177] * Starting "multinode-449000" primary control-plane node in "multinode-449000" cluster
	I0422 04:38:10.637091    6416 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 04:38:10.637190    6416 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0422 04:38:10.637216    6416 cache.go:56] Caching tarball of preloaded images
	I0422 04:38:10.637410    6416 preload.go:173] Found /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0422 04:38:10.637428    6416 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 04:38:10.637608    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:10.638476    6416 start.go:360] acquireMachinesLock for multinode-449000: {Name:mke81a6cfc4bf5ce8e1de7ad51be0d2fed5c5582 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 04:38:10.638592    6416 start.go:364] duration metric: took 92.843µs to acquireMachinesLock for "multinode-449000"
	I0422 04:38:10.638625    6416 start.go:96] Skipping create...Using existing machine configuration
	I0422 04:38:10.638642    6416 fix.go:54] fixHost starting: 
	I0422 04:38:10.639054    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.639115    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.648338    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52206
	I0422 04:38:10.648728    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.649122    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.649138    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.649380    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.649549    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.649663    6416 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I0422 04:38:10.649749    6416 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.649830    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 6245
	I0422 04:38:10.650803    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 6245 missing from process table
	I0422 04:38:10.650860    6416 fix.go:112] recreateIfNeeded on multinode-449000: state=Stopped err=<nil>
	I0422 04:38:10.650884    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	W0422 04:38:10.650971    6416 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 04:38:10.692813    6416 out.go:177] * Restarting existing hyperkit VM for "multinode-449000" ...
	I0422 04:38:10.715060    6416 main.go:141] libmachine: (multinode-449000) Calling .Start
	I0422 04:38:10.715338    6416 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.715396    6416 main.go:141] libmachine: (multinode-449000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid
	I0422 04:38:10.717236    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 6245 missing from process table
	I0422 04:38:10.717259    6416 main.go:141] libmachine: (multinode-449000) DBG | pid 6245 is in state "Stopped"
	I0422 04:38:10.717293    6416 main.go:141] libmachine: (multinode-449000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid...
	I0422 04:38:10.717482    6416 main.go:141] libmachine: (multinode-449000) DBG | Using UUID 586ad748-6be9-44d4-8ddd-2786953ca4c9
	I0422 04:38:10.827549    6416 main.go:141] libmachine: (multinode-449000) DBG | Generated MAC 3e:5c:84:88:5b:2b
	I0422 04:38:10.827575    6416 main.go:141] libmachine: (multinode-449000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000
	I0422 04:38:10.827703    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"586ad748-6be9-44d4-8ddd-2786953ca4c9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b15c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:38:10.827733    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"586ad748-6be9-44d4-8ddd-2786953ca4c9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b15c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:38:10.827793    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "586ad748-6be9-44d4-8ddd-2786953ca4c9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/multinode-449000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"}
	I0422 04:38:10.827825    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 586ad748-6be9-44d4-8ddd-2786953ca4c9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/multinode-449000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/console-ring -f kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"
	I0422 04:38:10.827841    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0422 04:38:10.829342    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Pid is 6429
	I0422 04:38:10.829707    6416 main.go:141] libmachine: (multinode-449000) DBG | Attempt 0
	I0422 04:38:10.829720    6416 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.829787    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 6429
	I0422 04:38:10.831421    6416 main.go:141] libmachine: (multinode-449000) DBG | Searching for 3e:5c:84:88:5b:2b in /var/db/dhcpd_leases ...
	I0422 04:38:10.831501    6416 main.go:141] libmachine: (multinode-449000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0422 04:38:10.831518    6416 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:33:e:18:56:49 ID:1,92:33:e:18:56:49 Lease:0x66264c0f}
	I0422 04:38:10.831540    6416 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:e2:d0:5:63:30:40 ID:1,e2:d0:5:63:30:40 Lease:0x66279d43}
	I0422 04:38:10.831555    6416 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:3e:5c:84:88:5b:2b ID:1,3e:5c:84:88:5b:2b Lease:0x66279ca6}
	I0422 04:38:10.831562    6416 main.go:141] libmachine: (multinode-449000) DBG | Found match: 3e:5c:84:88:5b:2b
	I0422 04:38:10.831566    6416 main.go:141] libmachine: (multinode-449000) DBG | IP: 192.169.0.16
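
The IP lookup above works by scanning macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the VM's MAC. A minimal sketch of that scan, assuming the bootpd lease format with ip_address= and hw_address= fields matching the parsed entries shown above (an illustration, not the driver's actual parser):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP returns the lease IP for the given MAC, relying on ip_address=
// preceding hw_address= within each lease block.
func findIP(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// format: hw_address=1,3e:5c:84:88:5b:2b
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIP("/var/db/dhcpd_leases", "3e:5c:84:88:5b:2b")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip) // 192.169.0.16 in the run above
}
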
	I0422 04:38:10.831599    6416 main.go:141] libmachine: (multinode-449000) Calling .GetConfigRaw
	I0422 04:38:10.832231    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:10.832383    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:10.832765    6416 machine.go:94] provisionDockerMachine start ...
	I0422 04:38:10.832776    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.832900    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:10.832988    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:10.833079    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:10.833169    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:10.833261    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:10.833384    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:10.833572    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:10.833579    6416 main.go:141] libmachine: About to run SSH command:
	hostname
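
Provisioning starts with this trivial `hostname` probe over SSH to confirm the guest is reachable. A self-contained sketch of running one command with a private key via golang.org/x/crypto/ssh (the key path is a placeholder; minikube's own runner layers retries and timeouts on top of this):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
	}
	client, err := ssh.Dial("tcp", "192.169.0.16:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostname: %s", out) // "minikube" before the rename below
}
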
	I0422 04:38:10.837041    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0422 04:38:10.890624    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0422 04:38:10.891322    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:10.891342    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:10.891353    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:10.891361    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:11.268602    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0422 04:38:11.268615    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0422 04:38:11.383528    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:11.383546    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:11.383558    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:11.383571    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:11.384537    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0422 04:38:11.384549    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0422 04:38:16.643459    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0422 04:38:16.643515    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0422 04:38:16.643527    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0422 04:38:16.667459    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0422 04:38:21.903644    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 04:38:21.903661    6416 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I0422 04:38:21.903793    6416 buildroot.go:166] provisioning hostname "multinode-449000"
	I0422 04:38:21.903802    6416 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I0422 04:38:21.903888    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:21.903992    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:21.904101    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.904188    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.904320    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:21.904442    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:21.904588    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:21.904600    6416 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000 && echo "multinode-449000" | sudo tee /etc/hostname
	I0422 04:38:21.971569    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000
	
	I0422 04:38:21.971595    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:21.971731    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:21.971831    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.971922    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.972011    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:21.972141    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:21.972284    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:21.972295    6416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 04:38:22.037323    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 04:38:22.037350    6416 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18711-1033/.minikube CaCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18711-1033/.minikube}
	I0422 04:38:22.037367    6416 buildroot.go:174] setting up certificates
	I0422 04:38:22.037374    6416 provision.go:84] configureAuth start
	I0422 04:38:22.037380    6416 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I0422 04:38:22.037516    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:22.037614    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.037712    6416 provision.go:143] copyHostCerts
	I0422 04:38:22.037744    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:38:22.037812    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem, removing ...
	I0422 04:38:22.037820    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:38:22.037947    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem (1082 bytes)
	I0422 04:38:22.038158    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:38:22.038199    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem, removing ...
	I0422 04:38:22.038204    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:38:22.038293    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem (1123 bytes)
	I0422 04:38:22.038447    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:38:22.038487    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem, removing ...
	I0422 04:38:22.038492    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:38:22.038571    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem (1675 bytes)
	I0422 04:38:22.038729    6416 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem org=jenkins.multinode-449000 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-449000]
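
The server cert generated here is signed by the local minikube CA and must carry every name the Docker TLS endpoint can be reached by, which is why the SAN list mixes IPs (127.0.0.1, 192.169.0.16) and DNS names (localhost, minikube, multinode-449000). A sketch of that issuance with crypto/x509, substituting a throwaway in-memory CA for the ca.pem/ca-key.pem files referenced above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for loading ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demoCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-449000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-449000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.16")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
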
	I0422 04:38:22.288976    6416 provision.go:177] copyRemoteCerts
	I0422 04:38:22.289045    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 04:38:22.289061    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.289250    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.289387    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.289552    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.289728    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:22.326188    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 04:38:22.326259    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0422 04:38:22.345869    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 04:38:22.345939    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0422 04:38:22.365184    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 04:38:22.365245    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 04:38:22.384572    6416 provision.go:87] duration metric: took 347.183732ms to configureAuth
	I0422 04:38:22.384586    6416 buildroot.go:189] setting minikube options for container-runtime
	I0422 04:38:22.384747    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:22.384780    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:22.384915    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.385016    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.385092    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.385179    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.385267    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.385392    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:22.385591    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:22.385600    6416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 04:38:22.442573    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0422 04:38:22.442585    6416 buildroot.go:70] root file system type: tmpfs
	I0422 04:38:22.442651    6416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 04:38:22.442664    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.442789    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.442871    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.442958    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.443072    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.443225    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:22.443357    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:22.443405    6416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 04:38:22.512640    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 04:38:22.512660    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.512796    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.512899    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.512984    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.513080    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.513216    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:22.513363    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:22.513377    6416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 04:38:24.188655    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0422 04:38:24.188669    6416 machine.go:97] duration metric: took 13.355824894s to provisionDockerMachine
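
The restart step above is deliberately conditional: the rendered unit is written to docker.service.new, diffed against the installed unit, and only swapped in (followed by daemon-reload, enable, restart) when they differ; here diff failed because no unit existed yet, so the file was installed fresh. A local-filesystem sketch of that compare-and-swap pattern (a stand-in for the SSH commands, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// installUnit writes rendered to path only when the content changed,
// then reloads systemd and restarts the service.
func installUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip daemon-reload/restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := installUnit("/tmp/docker.service", unit); err != nil { // demo path
		log.Fatal(err)
	}
}
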
	I0422 04:38:24.188682    6416 start.go:293] postStartSetup for "multinode-449000" (driver="hyperkit")
	I0422 04:38:24.188690    6416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 04:38:24.188702    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.188878    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 04:38:24.188902    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.189005    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.189091    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.189171    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.189261    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:24.226328    6416 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 04:38:24.229232    6416 command_runner.go:130] > NAME=Buildroot
	I0422 04:38:24.229244    6416 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 04:38:24.229250    6416 command_runner.go:130] > ID=buildroot
	I0422 04:38:24.229257    6416 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 04:38:24.229265    6416 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 04:38:24.229393    6416 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 04:38:24.229405    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/addons for local assets ...
	I0422 04:38:24.229504    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/files for local assets ...
	I0422 04:38:24.229694    6416 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> 14842.pem in /etc/ssl/certs
	I0422 04:38:24.229700    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> /etc/ssl/certs/14842.pem
	I0422 04:38:24.229905    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 04:38:24.237775    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /etc/ssl/certs/14842.pem (1708 bytes)
	I0422 04:38:24.256541    6416 start.go:296] duration metric: took 67.851408ms for postStartSetup
	I0422 04:38:24.256563    6416 fix.go:56] duration metric: took 13.617856509s for fixHost
	I0422 04:38:24.256575    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.256706    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.256802    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.256895    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.256967    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.257074    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:24.257215    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:24.257222    6416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 04:38:24.315363    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713785904.473126148
	
	I0422 04:38:24.315375    6416 fix.go:216] guest clock: 1713785904.473126148
	I0422 04:38:24.315380    6416 fix.go:229] Guest: 2024-04-22 04:38:24.473126148 -0700 PDT Remote: 2024-04-22 04:38:24.256566 -0700 PDT m=+14.050727463 (delta=216.560148ms)
	I0422 04:38:24.315396    6416 fix.go:200] guest clock delta is within tolerance: 216.560148ms
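
The guest-clock check runs `date +%s.%N` in the VM, parses the result, and compares it against the host's wall clock; the 216ms delta here is accepted as within tolerance. A sketch of the parse-and-compare step, using the logged timestamp and an assumed 2-second threshold (the real tolerance constant may differ):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, copied from the log above.
	guestOut := "1713785904.473126148"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Now() // in the real check, sampled right after the SSH call returns
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not minikube's constant
	fmt.Printf("delta=%v, within %v: %v\n", delta, tolerance, delta <= tolerance)
}
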
	I0422 04:38:24.315401    6416 start.go:83] releasing machines lock for "multinode-449000", held for 13.676725524s
	I0422 04:38:24.315421    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.315568    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:24.315664    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.316019    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.316120    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.316191    6416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 04:38:24.316222    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.316257    6416 ssh_runner.go:195] Run: cat /version.json
	I0422 04:38:24.316268    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.316316    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.316353    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.316410    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.316439    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.316486    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.316525    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.316572    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:24.316620    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:24.348077    6416 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0422 04:38:24.348180    6416 ssh_runner.go:195] Run: systemctl --version
	I0422 04:38:24.396154    6416 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 04:38:24.396617    6416 command_runner.go:130] > systemd 252 (252)
	I0422 04:38:24.396654    6416 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0422 04:38:24.396765    6416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 04:38:24.402095    6416 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 04:38:24.402152    6416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 04:38:24.402190    6416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 04:38:24.414497    6416 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0422 04:38:24.414528    6416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 04:38:24.414535    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:38:24.414635    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:38:24.429342    6416 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0422 04:38:24.429595    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0422 04:38:24.437952    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 04:38:24.446259    6416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 04:38:24.446300    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 04:38:24.454738    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:38:24.463080    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 04:38:24.471637    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:38:24.480009    6416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 04:38:24.488561    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 04:38:24.497065    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 04:38:24.505465    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 04:38:24.514035    6416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 04:38:24.521603    6416 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 04:38:24.521671    6416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 04:38:24.529437    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:24.637449    6416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0422 04:38:24.655839    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:38:24.655917    6416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 04:38:24.673150    6416 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0422 04:38:24.673163    6416 command_runner.go:130] > [Unit]
	I0422 04:38:24.673169    6416 command_runner.go:130] > Description=Docker Application Container Engine
	I0422 04:38:24.673183    6416 command_runner.go:130] > Documentation=https://docs.docker.com
	I0422 04:38:24.673188    6416 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0422 04:38:24.673192    6416 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0422 04:38:24.673196    6416 command_runner.go:130] > StartLimitBurst=3
	I0422 04:38:24.673200    6416 command_runner.go:130] > StartLimitIntervalSec=60
	I0422 04:38:24.673203    6416 command_runner.go:130] > [Service]
	I0422 04:38:24.673206    6416 command_runner.go:130] > Type=notify
	I0422 04:38:24.673210    6416 command_runner.go:130] > Restart=on-failure
	I0422 04:38:24.673216    6416 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0422 04:38:24.673223    6416 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0422 04:38:24.673230    6416 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0422 04:38:24.673236    6416 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0422 04:38:24.673241    6416 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0422 04:38:24.673247    6416 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0422 04:38:24.673253    6416 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0422 04:38:24.673264    6416 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0422 04:38:24.673270    6416 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0422 04:38:24.673279    6416 command_runner.go:130] > ExecStart=
	I0422 04:38:24.673291    6416 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0422 04:38:24.673296    6416 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0422 04:38:24.673303    6416 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0422 04:38:24.673309    6416 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0422 04:38:24.673312    6416 command_runner.go:130] > LimitNOFILE=infinity
	I0422 04:38:24.673316    6416 command_runner.go:130] > LimitNPROC=infinity
	I0422 04:38:24.673319    6416 command_runner.go:130] > LimitCORE=infinity
	I0422 04:38:24.673324    6416 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0422 04:38:24.673328    6416 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0422 04:38:24.673332    6416 command_runner.go:130] > TasksMax=infinity
	I0422 04:38:24.673335    6416 command_runner.go:130] > TimeoutStartSec=0
	I0422 04:38:24.673341    6416 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0422 04:38:24.673344    6416 command_runner.go:130] > Delegate=yes
	I0422 04:38:24.673349    6416 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0422 04:38:24.673353    6416 command_runner.go:130] > KillMode=process
	I0422 04:38:24.673356    6416 command_runner.go:130] > [Install]
	I0422 04:38:24.673365    6416 command_runner.go:130] > WantedBy=multi-user.target
	I0422 04:38:24.673434    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:38:24.685276    6416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 04:38:24.709796    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:38:24.724576    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:38:24.739589    6416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0422 04:38:24.761051    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:38:24.777401    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:38:24.796782    6416 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0422 04:38:24.797172    6416 ssh_runner.go:195] Run: which cri-dockerd
	I0422 04:38:24.800004    6416 command_runner.go:130] > /usr/bin/cri-dockerd
	I0422 04:38:24.800148    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 04:38:24.808594    6416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 04:38:24.821942    6416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 04:38:24.923982    6416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 04:38:25.041199    6416 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 04:38:25.041277    6416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0422 04:38:25.055516    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:25.153764    6416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 04:38:27.475204    6416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.321409526s)
	I0422 04:38:27.475263    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0422 04:38:27.486848    6416 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0422 04:38:27.500729    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 04:38:27.511120    6416 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0422 04:38:27.609886    6416 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0422 04:38:27.709696    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:27.818455    6416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0422 04:38:27.832514    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 04:38:27.843827    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:27.946861    6416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0422 04:38:28.005885    6416 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0422 04:38:28.005983    6416 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0422 04:38:28.010314    6416 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0422 04:38:28.010327    6416 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0422 04:38:28.010344    6416 command_runner.go:130] > Device: 0,22	Inode: 757         Links: 1
	I0422 04:38:28.010353    6416 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0422 04:38:28.010358    6416 command_runner.go:130] > Access: 2024-04-22 11:38:28.117421263 +0000
	I0422 04:38:28.010363    6416 command_runner.go:130] > Modify: 2024-04-22 11:38:28.117421263 +0000
	I0422 04:38:28.010368    6416 command_runner.go:130] > Change: 2024-04-22 11:38:28.119421095 +0000
	I0422 04:38:28.010372    6416 command_runner.go:130] >  Birth: -
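
With cri-docker.service restarted, minikube polls for /var/run/cri-dockerd.sock for up to 60 seconds before proceeding; the stat output above shows the socket appearing almost immediately. A sketch of such a bounded wait loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a unix socket or timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // illustrative poll interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
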
	I0422 04:38:28.010437    6416 start.go:562] Will wait 60s for crictl version
	I0422 04:38:28.010483    6416 ssh_runner.go:195] Run: which crictl
	I0422 04:38:28.013358    6416 command_runner.go:130] > /usr/bin/crictl
	I0422 04:38:28.013570    6416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 04:38:28.042737    6416 command_runner.go:130] > Version:  0.1.0
	I0422 04:38:28.042763    6416 command_runner.go:130] > RuntimeName:  docker
	I0422 04:38:28.042768    6416 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0422 04:38:28.042772    6416 command_runner.go:130] > RuntimeApiVersion:  v1
	I0422 04:38:28.043795    6416 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0422 04:38:28.043861    6416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 04:38:28.061054    6416 command_runner.go:130] > 26.0.1
	I0422 04:38:28.061844    6416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 04:38:28.077996    6416 command_runner.go:130] > 26.0.1
	I0422 04:38:28.123584    6416 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0422 04:38:28.123633    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:28.124044    6416 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0422 04:38:28.128797    6416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 04:38:28.139430    6416 kubeadm.go:877] updating cluster {Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 04:38:28.139513    6416 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 04:38:28.139574    6416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 04:38:28.159585    6416 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0422 04:38:28.159598    6416 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 04:38:28.159601    6416 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0422 04:38:28.159606    6416 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0422 04:38:28.159609    6416 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0422 04:38:28.159613    6416 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0422 04:38:28.159617    6416 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0422 04:38:28.159621    6416 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0422 04:38:28.159625    6416 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 04:38:28.159629    6416 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0422 04:38:28.160212    6416 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0422 04:38:28.160222    6416 docker.go:615] Images already preloaded, skipping extraction
	I0422 04:38:28.160287    6416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 04:38:28.175656    6416 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0422 04:38:28.175672    6416 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 04:38:28.175676    6416 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0422 04:38:28.175680    6416 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0422 04:38:28.175684    6416 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0422 04:38:28.175687    6416 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0422 04:38:28.175693    6416 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0422 04:38:28.175699    6416 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0422 04:38:28.175706    6416 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 04:38:28.175712    6416 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0422 04:38:28.175755    6416 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0422 04:38:28.175768    6416 cache_images.go:84] Images are preloaded, skipping loading
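
"Images are preloaded" means every image required for Kubernetes v1.30.0 on Docker already appears in `docker images --format {{.Repository}}:{{.Tag}}`, so the preload tarball extraction can be skipped. A sketch of that presence check against a subset of the expected list (abbreviated from the output above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Abbreviated expected set, copied from the log output above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	}
	missing := false
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing:", img)
			missing = true
		}
	}
	if !missing {
		fmt.Println("images already preloaded, skipping extraction")
	}
}
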
	I0422 04:38:28.175777    6416 kubeadm.go:928] updating node { 192.169.0.16 8443 v1.30.0 docker true true} ...
	I0422 04:38:28.175851    6416 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-449000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 04:38:28.175913    6416 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0422 04:38:28.192840    6416 command_runner.go:130] > cgroupfs
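
The cgroup driver is probed straight from the daemon with `docker info --format {{.CgroupDriver}}`; the answer (cgroupfs here) then feeds the kubelet's cgroupDriver setting in the generated config below. The probe itself is a single exec call:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	driver := strings.TrimSpace(string(out)) // "cgroupfs" in the run above
	fmt.Println("cgroup driver:", driver)
}
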
	I0422 04:38:28.193474    6416 cni.go:84] Creating CNI manager for ""
	I0422 04:38:28.193485    6416 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0422 04:38:28.193496    6416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 04:38:28.193512    6416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.16 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-449000 NodeName:multinode-449000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 04:38:28.193598    6416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-449000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 04:38:28.193662    6416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 04:38:28.201773    6416 command_runner.go:130] > kubeadm
	I0422 04:38:28.201781    6416 command_runner.go:130] > kubectl
	I0422 04:38:28.201785    6416 command_runner.go:130] > kubelet
	I0422 04:38:28.201888    6416 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 04:38:28.201931    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 04:38:28.209885    6416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0422 04:38:28.223696    6416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 04:38:28.236998    6416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0422 04:38:28.250595    6416 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0422 04:38:28.253512    6416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 04:38:28.263052    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:28.375325    6416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 04:38:28.390337    6416 certs.go:68] Setting up /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000 for IP: 192.169.0.16
	I0422 04:38:28.390351    6416 certs.go:194] generating shared ca certs ...
	I0422 04:38:28.390365    6416 certs.go:226] acquiring lock for ca certs: {Name:mk61c76ef71e4ac1dee0d1c0b2031f8bdb3ae618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 04:38:28.390542    6416 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.key
	I0422 04:38:28.390612    6416 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.key
	I0422 04:38:28.390624    6416 certs.go:256] generating profile certs ...
	I0422 04:38:28.390724    6416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/client.key
	I0422 04:38:28.390806    6416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.key.36931f31
	I0422 04:38:28.390886    6416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.key
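
Each "skipping valid signed profile cert" line means an existing certificate passed a reuse check rather than being regenerated. A sketch of the kind of validation involved, checking expiry and that the node IP is still covered by the SANs (illustrative; not minikube's exact criteria):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
	"time"
)

// certStillValid reports whether the PEM cert at path is unexpired and
// still covers the given node IP.
func certStillValid(path string, ip net.IP) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	if time.Now().After(cert.NotAfter) {
		return false, nil // expired: regenerate
	}
	if err := cert.VerifyHostname(ip.String()); err != nil {
		return false, nil // node IP no longer in SANs: regenerate
	}
	return true, nil
}

func main() {
	ok, err := certStillValid("apiserver.crt", net.ParseIP("192.169.0.16"))
	fmt.Println(ok, err)
}
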
	I0422 04:38:28.390893    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 04:38:28.390915    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 04:38:28.390933    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 04:38:28.390951    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 04:38:28.390969    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 04:38:28.390998    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 04:38:28.391026    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 04:38:28.391045    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 04:38:28.391154    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484.pem (1338 bytes)
	W0422 04:38:28.391201    6416 certs.go:480] ignoring /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484_empty.pem, impossibly tiny 0 bytes
	I0422 04:38:28.391209    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 04:38:28.391243    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem (1082 bytes)
	I0422 04:38:28.391280    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem (1123 bytes)
	I0422 04:38:28.391309    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem (1675 bytes)
	I0422 04:38:28.391381    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem (1708 bytes)
	I0422 04:38:28.391416    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.391450    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.391470    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484.pem -> /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.391931    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 04:38:28.433213    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0422 04:38:28.459785    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 04:38:28.482771    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 04:38:28.504810    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 04:38:28.525273    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 04:38:28.545136    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 04:38:28.565757    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 04:38:28.585678    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /usr/share/ca-certificates/14842.pem (1708 bytes)
	I0422 04:38:28.605783    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 04:38:28.625729    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484.pem --> /usr/share/ca-certificates/1484.pem (1338 bytes)
	I0422 04:38:28.645804    6416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 04:38:28.659496    6416 ssh_runner.go:195] Run: openssl version
	I0422 04:38:28.663618    6416 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0422 04:38:28.663762    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14842.pem && ln -fs /usr/share/ca-certificates/14842.pem /etc/ssl/certs/14842.pem"
	I0422 04:38:28.672105    6416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.675446    6416 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 22 10:45 /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.675571    6416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:45 /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.675614    6416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.679705    6416 command_runner.go:130] > 3ec20f2e
	I0422 04:38:28.679843    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 04:38:28.688233    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 04:38:28.696714    6416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.700071    6416 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 22 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.700137    6416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.700171    6416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.704336    6416 command_runner.go:130] > b5213941
	I0422 04:38:28.704507    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 04:38:28.712810    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484.pem && ln -fs /usr/share/ca-certificates/1484.pem /etc/ssl/certs/1484.pem"
	I0422 04:38:28.721043    6416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.724265    6416 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 22 10:45 /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.724345    6416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:45 /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.724381    6416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.728520    6416 command_runner.go:130] > 51391683
	I0422 04:38:28.728643    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1484.pem /etc/ssl/certs/51391683.0"
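
The three hash values printed above (3ec20f2e, b5213941, 51391683) come from "openssl x509 -hash -noout", which prints the subject-name hash that OpenSSL uses to look up CA files in /etc/ssl/certs; symlinking <hash>.0 to each PEM is the per-file version of what c_rehash does for a whole directory. A sketch of that step, assuming the openssl CLI is on PATH (installCACert is an invented helper name):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func installCACert(pem string) error {
    // openssl x509 -hash -noout prints the subject-name hash that the
    // OpenSSL cert-directory lookup expects as a file name.
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    // ln -fs equivalent: drop any stale link, then point <hash>.0 at the PEM.
    _ = os.Remove(link)
    return os.Symlink(pem, link)
}

func main() {
    if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}
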
	I0422 04:38:28.737033    6416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 04:38:28.740271    6416 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 04:38:28.740282    6416 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0422 04:38:28.740286    6416 command_runner.go:130] > Device: 253,1	Inode: 4196178     Links: 1
	I0422 04:38:28.740291    6416 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 04:38:28.740297    6416 command_runner.go:130] > Access: 2024-04-22 11:36:05.475707495 +0000
	I0422 04:38:28.740302    6416 command_runner.go:130] > Modify: 2024-04-22 11:29:04.616277157 +0000
	I0422 04:38:28.740306    6416 command_runner.go:130] > Change: 2024-04-22 11:29:04.616277157 +0000
	I0422 04:38:28.740310    6416 command_runner.go:130] >  Birth: 2024-04-22 11:29:04.615277214 +0000
	I0422 04:38:28.740411    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 04:38:28.744636    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.744743    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 04:38:28.748846    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.748976    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 04:38:28.753077    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.753210    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 04:38:28.757404    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.757528    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 04:38:28.761638    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.761800    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 04:38:28.765925    6416 command_runner.go:130] > Certificate will not expire
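
Each of the checks above runs "openssl x509 -checkend 86400", which exits 0 and prints "Certificate will not expire" only if the certificate is still valid 24 hours from now; a failing check would force regeneration. The same lookahead test done natively in Go (certNotExpiringSoon is an invented name, not minikube's):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func certNotExpiringSoon(path string, lookahead time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    // Mirrors -checkend: still valid at now + lookahead?
    return time.Now().Add(lookahead).Before(cert.NotAfter), nil
}

func main() {
    ok, err := certNotExpiringSoon("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
    fmt.Println(ok, err)
}
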
	I0422 04:38:28.766126    6416 kubeadm.go:391] StartCluster: {Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:38:28.766236    6416 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0422 04:38:28.777332    6416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 04:38:28.784688    6416 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0422 04:38:28.784698    6416 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0422 04:38:28.784702    6416 command_runner.go:130] > /var/lib/minikube/etcd:
	I0422 04:38:28.784705    6416 command_runner.go:130] > member
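
Finding /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and a populated /var/lib/minikube/etcd is what flips the flow into the cluster-restart path on the lines that follow, rather than a fresh kubeadm init. A sketch of that heuristic as implied by kubeadm.go:407 (the exact rule in minikube's source may differ):

package main

import (
    "fmt"
    "os"
)

func main() {
    // Marker files that a previous kubeadm bootstrap leaves behind.
    markers := []string{
        "/var/lib/kubelet/kubeadm-flags.env",
        "/var/lib/kubelet/config.yaml",
        "/var/lib/minikube/etcd",
    }
    existing := 0
    for _, m := range markers {
        if _, err := os.Stat(m); err == nil {
            existing++
        }
    }
    fmt.Println("attempt cluster restart:", existing > 0)
}
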
	W0422 04:38:28.784814    6416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 04:38:28.784822    6416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 04:38:28.784829    6416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 04:38:28.784866    6416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 04:38:28.792597    6416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 04:38:28.792898    6416 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:28.792981    6416 kubeconfig.go:62] /Users/jenkins/minikube-integration/18711-1033/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-449000" cluster setting kubeconfig missing "multinode-449000" context setting]
	I0422 04:38:28.793208    6416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18711-1033/kubeconfig: {Name:mkd60fed3a4688e81c1999ca37fdf35eadd19815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 04:38:28.793897    6416 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:28.794090    6416 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7e5aa40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0422 04:38:28.794400    6416 cert_rotation.go:137] Starting client certificate rotation controller
	I0422 04:38:28.794564    6416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 04:38:28.801771    6416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.16
	I0422 04:38:28.801789    6416 kubeadm.go:1154] stopping kube-system containers ...
	I0422 04:38:28.801838    6416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0422 04:38:28.816657    6416 command_runner.go:130] > 7fd342a68d84
	I0422 04:38:28.816667    6416 command_runner.go:130] > c6d63c83b44a
	I0422 04:38:28.816671    6416 command_runner.go:130] > 429b0a81fe65
	I0422 04:38:28.816674    6416 command_runner.go:130] > d5b3b5d5a468
	I0422 04:38:28.816678    6416 command_runner.go:130] > 7ad82cc3e663
	I0422 04:38:28.816681    6416 command_runner.go:130] > 8fd92d3d559f
	I0422 04:38:28.816693    6416 command_runner.go:130] > d272ef1c679e
	I0422 04:38:28.816697    6416 command_runner.go:130] > 8fc5f2d8668e
	I0422 04:38:28.816700    6416 command_runner.go:130] > be4f0b4b588e
	I0422 04:38:28.816704    6416 command_runner.go:130] > 62b5721c79fa
	I0422 04:38:28.816707    6416 command_runner.go:130] > 1df263b70ea2
	I0422 04:38:28.816710    6416 command_runner.go:130] > 8ac986224699
	I0422 04:38:28.816713    6416 command_runner.go:130] > 4cbfdf285d1b
	I0422 04:38:28.816716    6416 command_runner.go:130] > d6f28e2bec07
	I0422 04:38:28.816724    6416 command_runner.go:130] > 46dba4d36ef7
	I0422 04:38:28.816727    6416 command_runner.go:130] > 84c0422896cc
	I0422 04:38:28.816730    6416 command_runner.go:130] > d0dcd3425466
	I0422 04:38:28.816734    6416 command_runner.go:130] > c20333287578
	I0422 04:38:28.816737    6416 command_runner.go:130] > d5f7a23a34fc
	I0422 04:38:28.816741    6416 command_runner.go:130] > f83965b353cb
	I0422 04:38:28.816744    6416 command_runner.go:130] > 8e1ff1cf8fb4
	I0422 04:38:28.816748    6416 command_runner.go:130] > 5a57671878b6
	I0422 04:38:28.816751    6416 command_runner.go:130] > af6978b977fc
	I0422 04:38:28.816755    6416 command_runner.go:130] > 1f77c8f168b4
	I0422 04:38:28.816758    6416 command_runner.go:130] > c2f38fcb314e
	I0422 04:38:28.816762    6416 command_runner.go:130] > 1113d226e35e
	I0422 04:38:28.816765    6416 command_runner.go:130] > 769ad1ec6855
	I0422 04:38:28.816768    6416 command_runner.go:130] > 3874d8a2aa4c
	I0422 04:38:28.816771    6416 command_runner.go:130] > 476f40892e40
	I0422 04:38:28.816775    6416 command_runner.go:130] > 782b924a6d7c
	I0422 04:38:28.816784    6416 command_runner.go:130] > f03a888f78dc
	I0422 04:38:28.817341    6416 docker.go:483] Stopping containers: [7fd342a68d84 c6d63c83b44a 429b0a81fe65 d5b3b5d5a468 7ad82cc3e663 8fd92d3d559f d272ef1c679e 8fc5f2d8668e be4f0b4b588e 62b5721c79fa 1df263b70ea2 8ac986224699 4cbfdf285d1b d6f28e2bec07 46dba4d36ef7 84c0422896cc d0dcd3425466 c20333287578 d5f7a23a34fc f83965b353cb 8e1ff1cf8fb4 5a57671878b6 af6978b977fc 1f77c8f168b4 c2f38fcb314e 1113d226e35e 769ad1ec6855 3874d8a2aa4c 476f40892e40 782b924a6d7c f03a888f78dc]
	I0422 04:38:28.817433    6416 ssh_runner.go:195] Run: docker stop 7fd342a68d84 c6d63c83b44a 429b0a81fe65 d5b3b5d5a468 7ad82cc3e663 8fd92d3d559f d272ef1c679e 8fc5f2d8668e be4f0b4b588e 62b5721c79fa 1df263b70ea2 8ac986224699 4cbfdf285d1b d6f28e2bec07 46dba4d36ef7 84c0422896cc d0dcd3425466 c20333287578 d5f7a23a34fc f83965b353cb 8e1ff1cf8fb4 5a57671878b6 af6978b977fc 1f77c8f168b4 c2f38fcb314e 1113d226e35e 769ad1ec6855 3874d8a2aa4c 476f40892e40 782b924a6d7c f03a888f78dc
	I0422 04:38:28.827936    6416 command_runner.go:130] > 7fd342a68d84
	I0422 04:38:28.828422    6416 command_runner.go:130] > c6d63c83b44a
	I0422 04:38:28.828430    6416 command_runner.go:130] > 429b0a81fe65
	I0422 04:38:28.828434    6416 command_runner.go:130] > d5b3b5d5a468
	I0422 04:38:28.828438    6416 command_runner.go:130] > 7ad82cc3e663
	I0422 04:38:28.828464    6416 command_runner.go:130] > 8fd92d3d559f
	I0422 04:38:28.828982    6416 command_runner.go:130] > d272ef1c679e
	I0422 04:38:28.828988    6416 command_runner.go:130] > 8fc5f2d8668e
	I0422 04:38:28.829668    6416 command_runner.go:130] > be4f0b4b588e
	I0422 04:38:28.832080    6416 command_runner.go:130] > 62b5721c79fa
	I0422 04:38:28.832193    6416 command_runner.go:130] > 1df263b70ea2
	I0422 04:38:28.832198    6416 command_runner.go:130] > 8ac986224699
	I0422 04:38:28.832202    6416 command_runner.go:130] > 4cbfdf285d1b
	I0422 04:38:28.832205    6416 command_runner.go:130] > d6f28e2bec07
	I0422 04:38:28.832209    6416 command_runner.go:130] > 46dba4d36ef7
	I0422 04:38:28.832264    6416 command_runner.go:130] > 84c0422896cc
	I0422 04:38:28.832272    6416 command_runner.go:130] > d0dcd3425466
	I0422 04:38:28.832275    6416 command_runner.go:130] > c20333287578
	I0422 04:38:28.832293    6416 command_runner.go:130] > d5f7a23a34fc
	I0422 04:38:28.832300    6416 command_runner.go:130] > f83965b353cb
	I0422 04:38:28.832303    6416 command_runner.go:130] > 8e1ff1cf8fb4
	I0422 04:38:28.832711    6416 command_runner.go:130] > 5a57671878b6
	I0422 04:38:28.832716    6416 command_runner.go:130] > af6978b977fc
	I0422 04:38:28.832720    6416 command_runner.go:130] > 1f77c8f168b4
	I0422 04:38:28.832723    6416 command_runner.go:130] > c2f38fcb314e
	I0422 04:38:28.832726    6416 command_runner.go:130] > 1113d226e35e
	I0422 04:38:28.832729    6416 command_runner.go:130] > 769ad1ec6855
	I0422 04:38:28.832732    6416 command_runner.go:130] > 3874d8a2aa4c
	I0422 04:38:28.832735    6416 command_runner.go:130] > 476f40892e40
	I0422 04:38:28.832927    6416 command_runner.go:130] > 782b924a6d7c
	I0422 04:38:28.832932    6416 command_runner.go:130] > f03a888f78dc
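
The filter name=k8s_.*_(kube-system)_ works because cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so every kube-system container matches; minikube lists the IDs and bulk-stops them before replaying the control plane. A self-contained sketch of that pair of commands, assuming the docker CLI is on PATH:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // cri-dockerd names containers k8s_<ctr>_<pod>_<namespace>_<uid>_<n>,
    // so this regex selects everything belonging to kube-system pods.
    out, err := exec.Command("docker", "ps", "-a",
        "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    if err != nil {
        panic(err)
    }
    ids := strings.Fields(string(out))
    if len(ids) == 0 {
        return // nothing to stop
    }
    // Equivalent of the single `docker stop <id...>` call in the log.
    fmt.Println(exec.Command("docker", append([]string{"stop"}, ids...)...).Run())
}
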
	I0422 04:38:28.833558    6416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 04:38:28.846289    6416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 04:38:28.853726    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0422 04:38:28.853737    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0422 04:38:28.853744    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0422 04:38:28.853753    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 04:38:28.853806    6416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 04:38:28.853814    6416 kubeadm.go:156] found existing configuration files:
	
	I0422 04:38:28.853856    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 04:38:28.860708    6416 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 04:38:28.860722    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 04:38:28.860761    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 04:38:28.868014    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 04:38:28.874999    6416 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 04:38:28.875063    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 04:38:28.875096    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 04:38:28.882687    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 04:38:28.889607    6416 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 04:38:28.889625    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 04:38:28.889659    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 04:38:28.897098    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 04:38:28.904160    6416 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 04:38:28.904182    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 04:38:28.904219    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
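
The grep/rm loop above keeps a kubeconfig only when it already points at https://control-plane.minikube.internal:8443; anything else is removed so the kubeadm phases below regenerate it. A compact Go rendering of the same rule (pruneStaleConf is an invented name):

package main

import (
    "os"
    "strings"
)

func pruneStaleConf(path, endpoint string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        if os.IsNotExist(err) {
            return nil // nothing to prune; kubeadm will write a fresh one
        }
        return err
    }
    if strings.Contains(string(data), endpoint) {
        return nil // conf already targets the right endpoint, keep it
    }
    return os.Remove(path) // stale: force kubeadm to regenerate it
}

func main() {
    for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
        _ = pruneStaleConf("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443")
    }
}
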
	I0422 04:38:28.911388    6416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 04:38:28.918978    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:28.983428    6416 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 04:38:28.983464    6416 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0422 04:38:28.983681    6416 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0422 04:38:28.983810    6416 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 04:38:28.984183    6416 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0422 04:38:28.984307    6416 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0422 04:38:28.984715    6416 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0422 04:38:28.984893    6416 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0422 04:38:28.985217    6416 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0422 04:38:28.985282    6416 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 04:38:28.985450    6416 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 04:38:28.986394    6416 command_runner.go:130] > [certs] Using the existing "sa" key
	I0422 04:38:28.986556    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:29.784155    6416 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 04:38:29.784183    6416 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 04:38:29.784214    6416 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 04:38:29.784219    6416 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 04:38:29.784225    6416 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 04:38:29.784230    6416 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 04:38:29.784356    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:29.833794    6416 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 04:38:29.834496    6416 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 04:38:29.834607    6416 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0422 04:38:29.945288    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:30.014615    6416 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 04:38:30.014631    6416 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 04:38:30.016187    6416 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 04:38:30.017381    6416 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 04:38:30.019427    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:30.090821    6416 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
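
Instead of a full kubeadm init, the restart replays individual init phases against the existing /var/tmp/minikube/kubeadm.yaml, so the still-valid certificates found earlier are reused ("Using existing ... on disk") and only the kubeconfigs, kubelet config and static-pod manifests are rewritten. The sequence of phases, exactly as logged, as a runnable sketch:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // The five phases replayed above, in order.
    phases := [][]string{
        {"init", "phase", "certs", "all"},
        {"init", "phase", "kubeconfig", "all"},
        {"init", "phase", "kubelet-start"},
        {"init", "phase", "control-plane", "all"},
        {"init", "phase", "etcd", "local"},
    }
    for _, p := range phases {
        args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
        cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubeadm", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "phase failed:", p, err)
            os.Exit(1)
        }
    }
}
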
	I0422 04:38:30.094334    6416 api_server.go:52] waiting for apiserver process to appear ...
	I0422 04:38:30.094395    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:30.596643    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:31.094655    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:31.596495    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:32.094700    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:32.106469    6416 command_runner.go:130] > 1523
	I0422 04:38:32.106682    6416 api_server.go:72] duration metric: took 2.01234244s to wait for apiserver process to appear ...
	I0422 04:38:32.106702    6416 api_server.go:88] waiting for apiserver healthz status ...
	I0422 04:38:32.106719    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:34.210139    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 04:38:34.210157    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 04:38:34.210166    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:34.246095    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 04:38:34.246114    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 04:38:34.607521    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:34.611305    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 04:38:34.611319    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 04:38:35.108108    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:35.112157    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 04:38:35.112169    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 04:38:35.608732    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:35.613635    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
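
The progression above is typical of an apiserver restart: first a 403 because the anonymous probe is rejected before RBAC bootstrap completes, then 500s listing each poststarthook that has not yet finished, and finally 200 with body "ok". A hedged sketch of such a poll loop (the real client authenticates with the cluster CA and client certificate; TLS verification is skipped here only to keep the example self-contained):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    for i := 0; i < 60; i++ {
        resp, err := client.Get("https://192.169.0.16:8443/healthz")
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            // 403 = anonymous request before RBAC bootstrap; 500 lists each
            // failing poststarthook; 200 with body "ok" means healthy.
            if resp.StatusCode == http.StatusOK {
                fmt.Println(string(body))
                return
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("apiserver never became healthy")
}
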
	I0422 04:38:35.613698    6416 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0422 04:38:35.613706    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:35.613713    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:35.613717    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:35.618517    6416 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 04:38:35.618530    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:35.618535    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:35.618538    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:35.618541    6416 round_trippers.go:580]     Content-Length: 263
	I0422 04:38:35.618549    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:35 GMT
	I0422 04:38:35.618552    6416 round_trippers.go:580]     Audit-Id: c529170c-3b23-45b4-b999-02e57985832e
	I0422 04:38:35.618556    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:35.618558    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:35.618581    6416 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0422 04:38:35.618663    6416 api_server.go:141] control plane version: v1.30.0
	I0422 04:38:35.618674    6416 api_server.go:131] duration metric: took 3.511948374s to wait for apiserver health ...
	I0422 04:38:35.618682    6416 cni.go:84] Creating CNI manager for ""
	I0422 04:38:35.618686    6416 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0422 04:38:35.642438    6416 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0422 04:38:35.663231    6416 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0422 04:38:35.668943    6416 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0422 04:38:35.668963    6416 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0422 04:38:35.668972    6416 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0422 04:38:35.669009    6416 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 04:38:35.669017    6416 command_runner.go:130] > Access: 2024-04-22 11:38:20.770719391 +0000
	I0422 04:38:35.669022    6416 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0422 04:38:35.669028    6416 command_runner.go:130] > Change: 2024-04-22 11:38:18.653478647 +0000
	I0422 04:38:35.669031    6416 command_runner.go:130] >  Birth: -
	I0422 04:38:35.669161    6416 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0422 04:38:35.669169    6416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0422 04:38:35.696431    6416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0422 04:38:36.251346    6416 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0422 04:38:36.251361    6416 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0422 04:38:36.251367    6416 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0422 04:38:36.251371    6416 command_runner.go:130] > daemonset.apps/kindnet configured
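
cni.go:136 shows the selection rule at work: with more than one node and no explicit CNI choice, minikube recommends kindnet and applies its manifest with the cluster's own kubectl; the "unchanged"/"configured" lines mean the objects already existed from the first start. The decision itself, reduced to an illustrative function (the single-node fallback shown is an assumption, not read from minikube's source):

package main

import "fmt"

func chooseCNI(requested string, nodeCount int) string {
    if requested != "" {
        return requested // explicit --cni flag wins
    }
    if nodeCount > 1 {
        return "kindnet" // multinode needs a pod network spanning hosts
    }
    return "bridge" // assumed single-node default for this sketch
}

func main() {
    fmt.Println(chooseCNI("", 2)) // "kindnet", matching the log
}
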
	I0422 04:38:36.251471    6416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 04:38:36.251526    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:36.251537    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.251547    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.251551    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.255055    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:36.255070    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.255078    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.255098    6416 round_trippers.go:580]     Audit-Id: e5a8b559-1f3e-4cf9-b695-523472bc9bd4
	I0422 04:38:36.255108    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.255112    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.255116    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.255122    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.255972    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1206"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 81186 chars]
	I0422 04:38:36.258779    6416 system_pods.go:59] 11 kube-system pods found
	I0422 04:38:36.258797    6416 system_pods.go:61] "coredns-7db6d8ff4d-tnr9d" [20633bf5-f995-44a1-b778-441b906496cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 04:38:36.258803    6416 system_pods.go:61] "etcd-multinode-449000" [ff3afd40-3400-4293-9fe4-03d22b8aba13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 04:38:36.258808    6416 system_pods.go:61] "kindnet-jkzvq" [1c07681b-b4af-41b9-917c-01183dcd9e7f] Running
	I0422 04:38:36.258812    6416 system_pods.go:61] "kindnet-pbqsb" [f1537c83-ca18-43b9-8fc5-91de97ef1d76] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0422 04:38:36.258817    6416 system_pods.go:61] "kindnet-sm2l6" [9c708c64-7f5e-4502-9381-d97e024ea343] Running
	I0422 04:38:36.258821    6416 system_pods.go:61] "kube-apiserver-multinode-449000" [cc0086bd-2049-4d09-a498-d26cc78b6968] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 04:38:36.258825    6416 system_pods.go:61] "kube-controller-manager-multinode-449000" [7d730ce3-3f6c-4cc8-aff2-bbcf584056c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 04:38:36.258829    6416 system_pods.go:61] "kube-proxy-4q52c" [764856b1-b523-4b58-8a33-6b81ab928c79] Running
	I0422 04:38:36.258833    6416 system_pods.go:61] "kube-proxy-jrtv2" [e6078b93-4180-484d-b486-9ddf193ba84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 04:38:36.258837    6416 system_pods.go:61] "kube-proxy-lx9ft" [38104bb7-7d9e-4377-9912-06cb23591941] Running
	I0422 04:38:36.258840    6416 system_pods.go:61] "storage-provisioner" [f286f444-3ade-4e54-85bb-8577f0234cca] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 04:38:36.258845    6416 system_pods.go:74] duration metric: took 7.366633ms to wait for pod list to return data ...
	I0422 04:38:36.258852    6416 node_conditions.go:102] verifying NodePressure condition ...
	I0422 04:38:36.258887    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0422 04:38:36.258892    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.258898    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.258903    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.261838    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:36.261873    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.261881    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.261885    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.261898    6416 round_trippers.go:580]     Audit-Id: 6d19ca39-035d-4de5-a620-8aec9edb6f3d
	I0422 04:38:36.261904    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.261908    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.261912    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.262077    6416 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1206"},"items":[{"metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0422 04:38:36.262498    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:36.262510    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:36.262520    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:36.262523    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:36.262527    6416 node_conditions.go:105] duration metric: took 3.670893ms to run NodePressure ...
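
The two capacity pairs above (17734596Ki ephemeral storage and 2 CPUs, once per node) come from reading node status across both cluster members. The equivalent scan with client-go, as a sketch (kubeconfig path reused from the log; this is not minikube's actual code path):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        // Same fields the log reports: CPU and ephemeral-storage capacity.
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral=%s\n", n.Name, cpu.String(), eph.String())
    }
}
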
	I0422 04:38:36.262536    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:36.390392    6416 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0422 04:38:36.522457    6416 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0422 04:38:36.523385    6416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 04:38:36.523447    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0422 04:38:36.523452    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.523458    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.523462    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.525657    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:36.525666    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.525671    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.525674    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.525676    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.525678    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.525682    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.525684    6416 round_trippers.go:580]     Audit-Id: 5892a5a0-1ae9-40c2-a378-55172958401f
	I0422 04:38:36.526045    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 24485 chars]
	I0422 04:38:36.526622    6416 kubeadm.go:733] kubelet initialised
	I0422 04:38:36.526631    6416 kubeadm.go:734] duration metric: took 3.234861ms waiting for restarted kubelet to initialise ...
	I0422 04:38:36.526637    6416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 04:38:36.526667    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:36.526672    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.526677    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.526681    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.528657    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.528666    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.528675    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.528680    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.528684    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.528687    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.528690    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.528692    6416 round_trippers.go:580]     Audit-Id: dcc787ff-e71e-474e-886f-273858aeb216
	I0422 04:38:36.529573    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 81186 chars]
	I0422 04:38:36.531295    6416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.531340    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:36.531346    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.531351    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.531355    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.532672    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.532679    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.532684    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.532687    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.532690    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.532693    6416 round_trippers.go:580]     Audit-Id: f0bdc464-0eb7-4ece-980d-716fad8074ec
	I0422 04:38:36.532697    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.532700    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.533013    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:36.533251    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.533258    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.533264    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.533269    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.534392    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.534401    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.534406    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.534411    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.534415    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.534419    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.534423    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.534425    6416 round_trippers.go:580]     Audit-Id: 48a43717-6a7b-499b-b64a-9061d3621bc3
	I0422 04:38:36.534598    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.534772    6416 pod_ready.go:97] node "multinode-449000" hosting pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.534782    6416 pod_ready.go:81] duration metric: took 3.47759ms for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.534799    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
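
Each skip above is gated on the host node's Ready condition, not on the pod itself: the node object fetched right after the pod reports "Ready":"False", so pod_ready records the pod as not yet checkable. The condition test reduces to something like the following sketch over plain client-go types (not minikube's pod_ready.go internals):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether the node's Ready condition is True,
    // the value the log prints as: node ... has status "Ready":"False".
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
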
	I0422 04:38:36.534808    6416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.534842    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:36.534848    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.534854    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.534858    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.536076    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.536086    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.536093    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.536097    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.536104    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.536107    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.536110    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.536113    6416 round_trippers.go:580]     Audit-Id: d31041b8-f593-47b2-a556-c5c256a0cb70
	I0422 04:38:36.536311    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:36.536514    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.536520    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.536526    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.536530    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.537802    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.537810    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.537816    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.537822    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.537825    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.537829    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.537831    6416 round_trippers.go:580]     Audit-Id: c8b485d8-8aea-4e68-bed8-86a42c565330
	I0422 04:38:36.537834    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.538017    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.538192    6416 pod_ready.go:97] node "multinode-449000" hosting pod "etcd-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.538201    6416 pod_ready.go:81] duration metric: took 3.387018ms for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.538207    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "etcd-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.538217    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.538244    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I0422 04:38:36.538249    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.538254    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.538259    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.539435    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.539444    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.539449    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.539477    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.539485    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.539488    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.539491    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.539494    6416 round_trippers.go:580]     Audit-Id: 807b0c92-21fb-452a-bbd1-56e50b42618c
	I0422 04:38:36.539653    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"cc0086bd-2049-4d09-a498-d26cc78b6968","resourceVersion":"1194","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.mirror":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.seen":"2024-04-22T11:29:12.576362787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0422 04:38:36.539885    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.539891    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.539897    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.539901    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.541011    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.541021    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.541042    6416 round_trippers.go:580]     Audit-Id: e41fc13b-3c0d-4dba-b812-6e69e5e48e6f
	I0422 04:38:36.541052    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.541056    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.541059    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.541065    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.541067    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.541167    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.541335    6416 pod_ready.go:97] node "multinode-449000" hosting pod "kube-apiserver-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.541348    6416 pod_ready.go:81] duration metric: took 3.126115ms for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.541353    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-apiserver-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.541361    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.541388    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I0422 04:38:36.541393    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.541398    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.541402    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.542638    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.542646    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.542651    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.542655    6416 round_trippers.go:580]     Audit-Id: 6eb70293-fb76-432d-af2a-fad537691f3b
	I0422 04:38:36.542660    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.542665    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.542668    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.542670    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.542827    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"7d730ce3-3f6c-4cc8-aff2-bbcf584056c7","resourceVersion":"1193","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.mirror":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.seen":"2024-04-22T11:29:12.576363612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0422 04:38:36.653009    6416 request.go:629] Waited for 109.901067ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.653074    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.653096    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.653122    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.653129    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.654328    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.654340    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.654347    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.654354    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.654359    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.654365    6416 round_trippers.go:580]     Audit-Id: 8b638333-4676-4716-9e39-c2a2c555a9a6
	I0422 04:38:36.654370    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.654374    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.654683    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.654874    6416 pod_ready.go:97] node "multinode-449000" hosting pod "kube-controller-manager-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.654884    6416 pod_ready.go:81] duration metric: took 113.51762ms for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.654891    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-controller-manager-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
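
The "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's local rate limiter, whose defaults are QPS=5 and Burst=10; once the burst is spent, each request waits for the token bucket to refill, which matches the roughly 100-200ms delays in the log. A sketch of where those knobs live (the higher values are illustrative, not minikube's settings):

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient raises the client-side limits that produce the
    // "Waited for ..." throttling lines above.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second once Burst is exhausted
        cfg.Burst = 100 // default is 10
        return kubernetes.NewForConfig(cfg)
    }
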
	I0422 04:38:36.654896    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.851611    6416 request.go:629] Waited for 196.667348ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q52c
	I0422 04:38:36.851699    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q52c
	I0422 04:38:36.851710    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.851722    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.851731    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.854315    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:36.854328    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.854335    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.854340    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.854344    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.854349    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:36.854353    6416 round_trippers.go:580]     Audit-Id: 3c9c16c7-f078-479e-b7ac-5ecc7f6f6364
	I0422 04:38:36.854357    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.854743    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4q52c","generateName":"kube-proxy-","namespace":"kube-system","uid":"764856b1-b523-4b58-8a33-6b81ab928c79","resourceVersion":"1162","creationTimestamp":"2024-04-22T11:32:35Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0422 04:38:37.052551    6416 request.go:629] Waited for 197.551006ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m03
	I0422 04:38:37.052728    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m03
	I0422 04:38:37.052740    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.052752    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.052758    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.055373    6416 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0422 04:38:37.055391    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.055399    6416 round_trippers.go:580]     Content-Length: 210
	I0422 04:38:37.055411    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.055417    6416 round_trippers.go:580]     Audit-Id: 05ed52ac-7276-41fd-901f-76455ea13c24
	I0422 04:38:37.055421    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.055425    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.055429    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.055434    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.055463    6416 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-449000-m03\" not found","reason":"NotFound","details":{"name":"multinode-449000-m03","kind":"nodes"},"code":404}
	I0422 04:38:37.055598    6416 pod_ready.go:97] node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
	I0422 04:38:37.055617    6416 pod_ready.go:81] duration metric: took 400.713666ms for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:37.055627    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
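
The 404 above is returned as a typed Status object ("reason":"NotFound"), which clients can distinguish from transport failures; here it means node multinode-449000-m03 was removed while its kube-proxy pod object still exists. A sketch of the standard check with apimachinery's errors package:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // getNode distinguishes a deleted node (the 404 above) from other failures.
    func getNode(cs *kubernetes.Clientset, name string) (*corev1.Node, error) {
        n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return nil, fmt.Errorf("nodes %q not found", name) // matches the message above
        }
        return n, err
    }
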
	I0422 04:38:37.055634    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:37.252016    6416 request.go:629] Waited for 196.330827ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jrtv2
	I0422 04:38:37.252079    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jrtv2
	I0422 04:38:37.252132    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.252143    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.252150    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.254673    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:37.254686    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.254694    6416 round_trippers.go:580]     Audit-Id: 116d4872-cccb-42be-98a3-b84be6adc79b
	I0422 04:38:37.254699    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.254704    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.254708    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.254713    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.254717    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.254889    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jrtv2","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6078b93-4180-484d-b486-9ddf193ba84e","resourceVersion":"1210","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0422 04:38:37.452641    6416 request.go:629] Waited for 197.411736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:37.452684    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:37.452691    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.452699    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.452704    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.455071    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:37.455084    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.455091    6416 round_trippers.go:580]     Audit-Id: f9ad74d9-00cf-4bf3-a98f-0acd5c5bc98e
	I0422 04:38:37.455096    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.455101    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.455106    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.455111    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.455114    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.455269    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:37.455517    6416 pod_ready.go:97] node "multinode-449000" hosting pod "kube-proxy-jrtv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:37.455531    6416 pod_ready.go:81] duration metric: took 399.887492ms for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:37.455540    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-proxy-jrtv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:37.455546    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:37.651867    6416 request.go:629] Waited for 196.268793ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:37.651926    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:37.651935    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.651977    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.651984    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.654765    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:37.654782    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.654789    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.654799    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.654825    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.654837    6416 round_trippers.go:580]     Audit-Id: 6351e0ca-778b-4663-a525-703f77101695
	I0422 04:38:37.654842    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.654847    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.654945    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lx9ft","generateName":"kube-proxy-","namespace":"kube-system","uid":"38104bb7-7d9e-4377-9912-06cb23591941","resourceVersion":"1031","creationTimestamp":"2024-04-22T11:31:54Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:31:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0422 04:38:37.852318    6416 request.go:629] Waited for 197.053333ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:37.852353    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:37.852373    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.852379    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.852384    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.853907    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:37.853920    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.853926    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.853931    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:37.853934    6416 round_trippers.go:580]     Audit-Id: 28ed923c-5450-4d7c-aaba-08ec83f366c0
	I0422 04:38:37.853937    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.853940    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.853943    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.854049    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000-m02","uid":"cf524355-0b8a-4495-8a18-e4d0f38226d6","resourceVersion":"1048","creationTimestamp":"2024-04-22T11:36:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_22T04_36_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:36:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0422 04:38:37.854222    6416 pod_ready.go:92] pod "kube-proxy-lx9ft" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:37.854231    6416 pod_ready.go:81] duration metric: took 398.676282ms for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:37.854238    6416 pod_ready.go:38] duration metric: took 1.327587035s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 04:38:37.854250    6416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 04:38:37.863339    6416 command_runner.go:130] > -16
	I0422 04:38:37.863586    6416 ops.go:34] apiserver oom_adj: -16
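
The shell line above confirms the kernel OOM killer deprioritizes the API server: /proc/<pid>/oom_adj reads -16. A local sketch of the same probe, assuming a single kube-apiserver process as in the VM (pgrep is the tool the log itself uses; oom_adj is the legacy interface, superseded by oom_score_adj):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Assumption: exactly one kube-apiserver process is running.
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(adj)) // the log above reports -16
    }
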
	I0422 04:38:37.863593    6416 kubeadm.go:591] duration metric: took 9.078711531s to restartPrimaryControlPlane
	I0422 04:38:37.863599    6416 kubeadm.go:393] duration metric: took 9.097428647s to StartCluster
	I0422 04:38:37.863608    6416 settings.go:142] acquiring lock: {Name:mk90f0ef82bf791c6c0ccd9a8a16931fa57323b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 04:38:37.863686    6416 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:37.864075    6416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18711-1033/kubeconfig: {Name:mkd60fed3a4688e81c1999ca37fdf35eadd19815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 04:38:37.864331    6416 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0422 04:38:37.887491    6416 out.go:177] * Verifying Kubernetes components...
	I0422 04:38:37.864345    6416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 04:38:37.864455    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:37.950305    6416 out.go:177] * Enabled addons: 
	I0422 04:38:37.929535    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:37.971256    6416 addons.go:505] duration metric: took 106.914277ms for enable addons: enabled=[]
	I0422 04:38:38.118912    6416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 04:38:38.131925    6416 node_ready.go:35] waiting up to 6m0s for node "multinode-449000" to be "Ready" ...
	I0422 04:38:38.131981    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:38.131986    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.131999    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.132001    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.133349    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:38.133364    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.133371    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.133378    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.133382    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.133385    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.133387    6416 round_trippers.go:580]     Audit-Id: 5ee4bdeb-d4a9-4eab-94b0-b9477257f16d
	I0422 04:38:38.133390    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.133498    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:38.133689    6416 node_ready.go:49] node "multinode-449000" has status "Ready":"True"
	I0422 04:38:38.133701    6416 node_ready.go:38] duration metric: took 1.757174ms for node "multinode-449000" to be "Ready" ...
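
node_ready above is a poll loop: GET the node, inspect its Ready condition, retry until the 6m0s budget runs out; here it succeeds on the first probe. A sketch of the same pattern with apimachinery's wait helper (PollUntilContextTimeout exists in recent apimachinery releases; the 6m timeout mirrors the log, but the 2s interval is an assumption):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node's Ready condition is True.
    func waitNodeReady(cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as retryable within the budget
                }
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
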
	I0422 04:38:38.133707    6416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 04:38:38.251743    6416 request.go:629] Waited for 117.992667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:38.251873    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:38.251886    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.251897    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.251903    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.255540    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:38.255555    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.255561    6416 round_trippers.go:580]     Audit-Id: 755d3e42-dbf4-4c7e-8245-315286a2aa5a
	I0422 04:38:38.255564    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.255567    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.255571    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.255585    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.255591    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.256136    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1212"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 80593 chars]
	I0422 04:38:38.257861    6416 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:38.452657    6416 request.go:629] Waited for 194.724743ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.452778    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.452789    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.452800    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.452808    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.455531    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:38.455543    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.455550    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.455555    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.455559    6416 round_trippers.go:580]     Audit-Id: f0ff366b-ed5d-476a-89c4-ea17d980f532
	I0422 04:38:38.455562    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.455566    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.455570    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.455770    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:38.653655    6416 request.go:629] Waited for 197.490451ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:38.653742    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:38.653753    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.653767    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.653776    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.655850    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:38.655864    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.655871    6416 round_trippers.go:580]     Audit-Id: 6bf53d69-08ea-44ac-896f-d75eba5177d1
	I0422 04:38:38.655900    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.655908    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.655913    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.655918    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.655921    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.656123    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:38.851891    6416 request.go:629] Waited for 93.190651ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.852023    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.852034    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.852044    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.852053    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.854517    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:38.854530    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.854537    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.854543    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.854547    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.854552    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.854557    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:38.854560    6416 round_trippers.go:580]     Audit-Id: b98d8805-0338-4f64-bb8e-2799febe32bd
	I0422 04:38:38.854632    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:39.051674    6416 request.go:629] Waited for 196.682688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.051769    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.051778    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.051784    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.051791    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.053679    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:39.053690    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.053695    6416 round_trippers.go:580]     Audit-Id: 639ff229-1ef8-4d32-b611-7d58c2823fac
	I0422 04:38:39.053698    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.053701    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.053704    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.053706    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.053708    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.053809    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:39.258645    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:39.258672    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.258680    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.258685    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.261192    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:39.261212    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.261221    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.261227    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.261231    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.261235    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.261240    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.261245    6416 round_trippers.go:580]     Audit-Id: 765acc6a-86b3-4553-bf71-d8f337f95efb
	I0422 04:38:39.261530    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:39.451883    6416 request.go:629] Waited for 189.986329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.451924    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.451953    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.451959    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.451965    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.454648    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:39.454661    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.454667    6416 round_trippers.go:580]     Audit-Id: a6a1aacd-3eda-49fa-afb5-98bd022f1106
	I0422 04:38:39.454669    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.454672    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.454675    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.454677    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.454679    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.454760    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:39.758113    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:39.758135    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.758148    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.758155    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.760742    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:39.760753    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.760759    6416 round_trippers.go:580]     Audit-Id: 9862c951-baa4-44b7-99c5-a8f3a8360a7b
	I0422 04:38:39.760764    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.760768    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.760772    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.760775    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.760778    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.761100    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:39.852040    6416 request.go:629] Waited for 90.577038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.852106    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.852111    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.852116    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.852120    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.853899    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:39.853911    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.853917    6416 round_trippers.go:580]     Audit-Id: d0c77592-75e5-49cf-b86c-87b520b47e64
	I0422 04:38:39.853920    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.853923    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.853925    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.853928    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.853930    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:39.854020    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:40.259034    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:40.270218    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.270235    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.270242    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.272273    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:40.272289    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.272296    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.272302    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.272305    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.272309    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.272313    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.272317    6416 round_trippers.go:580]     Audit-Id: 590c77ec-399e-4542-a9f2-783f7614451b
	I0422 04:38:40.272448    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:40.272820    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:40.272830    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.272839    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.272844    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.273901    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:40.273909    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.273914    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.273918    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.273939    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.273953    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.273957    6416 round_trippers.go:580]     Audit-Id: 18568548-b7ce-4bd7-bcb8-e0021a01484e
	I0422 04:38:40.273961    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.274061    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:40.274226    6416 pod_ready.go:102] pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace has status "Ready":"False"
	I0422 04:38:40.758220    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:40.758240    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.758252    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.758258    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.760745    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:40.760755    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.760762    6416 round_trippers.go:580]     Audit-Id: 5079e81f-f3db-468f-a0bb-a30d08006d12
	I0422 04:38:40.760768    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.760773    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.760776    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.760791    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.760795    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.760851    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:40.761203    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:40.761213    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.761221    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.761227    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.762807    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:40.762816    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.762821    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.762824    6416 round_trippers.go:580]     Audit-Id: 29ccfd09-2837-4a1f-b8be-1d9ad18dad91
	I0422 04:38:40.762827    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.762829    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.762832    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.762835    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.762928    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:41.258379    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:41.258401    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.258414    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.258422    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.260958    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:41.260971    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.260978    6416 round_trippers.go:580]     Audit-Id: ddacdfed-30cd-44f2-a6c3-023c524e942c
	I0422 04:38:41.260982    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.260986    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.260990    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.260995    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.261001    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.261334    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:41.261715    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:41.261725    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.261733    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.261739    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.262917    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:41.262925    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.262930    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.262933    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.262936    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.262940    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.262944    6416 round_trippers.go:580]     Audit-Id: e29b3c7f-5920-4287-b8a1-e5b20ecd4f74
	I0422 04:38:41.262948    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.263092    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:41.759252    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:41.759279    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.759291    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.759298    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.761720    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:41.761734    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.761741    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.761747    6416 round_trippers.go:580]     Audit-Id: 598beba7-4166-4c7a-b232-70e75936f0b4
	I0422 04:38:41.761750    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.761754    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.761759    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.761764    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.762121    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:41.762485    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:41.762502    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.762510    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.762517    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.763864    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:41.763872    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.763880    6416 round_trippers.go:580]     Audit-Id: 9e38d6e0-ed46-4b2e-b08f-2c26d8fd6bd5
	I0422 04:38:41.763885    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.763890    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.763894    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.763898    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.763904    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.764129    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:42.260086    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:42.260102    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.260110    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.260113    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.262028    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:42.262041    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.262048    6416 round_trippers.go:580]     Audit-Id: 772f3cd6-ea53-487f-a66d-6693912928fc
	I0422 04:38:42.262056    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.262063    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.262066    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.262069    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.262073    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.262368    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:42.262647    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:42.262655    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.262660    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.262664    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.264022    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:42.264030    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.264034    6416 round_trippers.go:580]     Audit-Id: c752dbb7-57ef-4ae3-9a54-0e4ac43d1187
	I0422 04:38:42.264037    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.264042    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.264044    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.264046    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.264049    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.264102    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:42.758403    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:42.758419    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.758425    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.758429    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.761807    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:42.761822    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.761828    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.761831    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.761836    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.761839    6416 round_trippers.go:580]     Audit-Id: dc3e7d4c-9653-4275-8fab-43d05fc4384e
	I0422 04:38:42.761841    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.761844    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.761899    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:42.762194    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:42.762201    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.762206    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.762209    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.764557    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:42.764568    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.764572    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.764576    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.764578    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.764581    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.764584    6416 round_trippers.go:580]     Audit-Id: bf555109-b5da-43da-a16b-d5f37bfb7242
	I0422 04:38:42.764586    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.764647    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:42.764837    6416 pod_ready.go:102] pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace has status "Ready":"False"
	I0422 04:38:43.258020    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:43.258035    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.258042    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.258045    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.259661    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:43.259672    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.259677    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.259680    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.259682    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.259685    6416 round_trippers.go:580]     Audit-Id: 46a999b0-e90e-43ea-8ca0-faf384e56ad4
	I0422 04:38:43.259688    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.259689    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.259926    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:43.260233    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:43.260241    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.260247    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.260249    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.262390    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:43.262402    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.262409    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.262413    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.262417    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.262421    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.262423    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.262426    6416 round_trippers.go:580]     Audit-Id: f342952f-c3e6-4dfe-bcc5-cd1b10b7a535
	I0422 04:38:43.262621    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:43.758874    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:43.758902    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.758913    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.758921    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.761614    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:43.761632    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.761640    6416 round_trippers.go:580]     Audit-Id: 34c126a4-a5f7-44d8-bc52-867774f1460a
	I0422 04:38:43.761645    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.761649    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.761652    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.761679    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.761690    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.761860    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:43.762244    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:43.762255    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.762263    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.762266    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.763624    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:43.763631    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.763636    6416 round_trippers.go:580]     Audit-Id: af7aed87-b9d0-4a8d-ae3e-82eece9f0847
	I0422 04:38:43.763639    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.763642    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.763645    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.763648    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.763651    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.763895    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:44.258677    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:44.258702    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.258714    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.258720    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.261510    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:44.261527    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.261534    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.261538    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.261541    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.261545    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.261548    6416 round_trippers.go:580]     Audit-Id: 315f5465-c97c-4898-ac14-90127538a842
	I0422 04:38:44.261552    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.261635    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1290","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0422 04:38:44.262002    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:44.262012    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.262018    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.262022    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.263490    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.263500    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.263505    6416 round_trippers.go:580]     Audit-Id: 95440de8-4325-4764-aebf-d1aad22719d4
	I0422 04:38:44.263523    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.263530    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.263533    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.263535    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.263538    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.263632    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:44.263804    6416 pod_ready.go:92] pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:44.263813    6416 pod_ready.go:81] duration metric: took 6.005909926s for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:44.263819    6416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:44.263844    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:44.263849    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.263854    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.263857    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.265019    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.265027    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.265033    6416 round_trippers.go:580]     Audit-Id: 1dc5eb89-d434-4040-8fe9-a2472bcdeb29
	I0422 04:38:44.265036    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.265039    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.265043    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.265045    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.265049    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.265148    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:44.265366    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:44.265373    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.265383    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.265388    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.266503    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.266511    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.266517    6416 round_trippers.go:580]     Audit-Id: c0c8432f-04ce-407a-8ad5-55c2bc33b6b3
	I0422 04:38:44.266523    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.266529    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.266532    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.266536    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.266540    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.266708    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:44.764111    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:44.764138    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.764152    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.764158    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.766411    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:44.766420    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.766425    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.766428    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.766431    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.766434    6416 round_trippers.go:580]     Audit-Id: 45f79e37-2c5c-439f-98d1-a5341215bb6f
	I0422 04:38:44.766437    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.766440    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.766740    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:44.766987    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:44.766994    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.767000    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.767004    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.768050    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.768061    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.768071    6416 round_trippers.go:580]     Audit-Id: 99562bf7-89cc-4e0b-85a7-f43e8c3f42ed
	I0422 04:38:44.768075    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.768080    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.768082    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.768085    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.768088    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.768274    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:45.266095    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:45.272136    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.272174    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.272180    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.274741    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:45.274753    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.274760    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.274764    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.274768    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.274771    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.274774    6416 round_trippers.go:580]     Audit-Id: 4759a14a-bb8d-469f-bf38-86c3351c1bf2
	I0422 04:38:45.274777    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.275204    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:45.275526    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:45.275535    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.275543    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.275547    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.276897    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:45.276906    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.276911    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.276914    6416 round_trippers.go:580]     Audit-Id: b6980c9b-3e0e-4096-8a6b-8ae6c00ea8b1
	I0422 04:38:45.276917    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.276920    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.276923    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.276925    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.277007    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:45.766001    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:45.766052    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.766065    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.766073    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.769114    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:45.769134    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.769144    6416 round_trippers.go:580]     Audit-Id: 1530a2ba-ff19-463f-b50b-64b1174e18b0
	I0422 04:38:45.769155    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.769160    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.769166    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.769170    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.769174    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.769415    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:45.769660    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:45.769667    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.769672    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.769677    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.770995    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:45.771003    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.771008    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.771011    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.771015    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.771018    6416 round_trippers.go:580]     Audit-Id: ce8f0b43-9473-42d5-b65e-f6e290914c57
	I0422 04:38:45.771021    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.771023    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.771459    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:46.263959    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:46.263985    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.264013    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.264026    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.266366    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:46.266378    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.266385    6416 round_trippers.go:580]     Audit-Id: 911cd35e-4164-4039-8864-e21336dc297a
	I0422 04:38:46.266389    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.266393    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.266397    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.266400    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.266404    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.266561    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:46.266892    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:46.266901    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.266908    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.266914    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.268299    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:46.268312    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.268317    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.268321    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.268325    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.268328    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.268331    6416 round_trippers.go:580]     Audit-Id: 799fe622-6b95-4208-8f8c-b97ef24f4456
	I0422 04:38:46.268334    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.268455    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:46.268628    6416 pod_ready.go:102] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"False"
	I0422 04:38:46.763985    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:46.764024    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.764048    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.764053    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.765837    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:46.765847    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.765852    6416 round_trippers.go:580]     Audit-Id: c9d53c64-f445-4d8b-9792-713bdfd49228
	I0422 04:38:46.765856    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.765858    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.765887    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.765894    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.765897    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.766004    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:46.766242    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:46.766249    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.766254    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.766258    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.768012    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:46.768019    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.768023    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.768027    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.768029    6416 round_trippers.go:580]     Audit-Id: 0f422866-aa4c-4709-8b9a-f3c310fa0a14
	I0422 04:38:46.768032    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.768036    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.768040    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.768177    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:47.264105    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:47.264130    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.264141    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.264149    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.268211    6416 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 04:38:47.268223    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.268245    6416 round_trippers.go:580]     Audit-Id: ebe49c5c-07a8-4011-a6fe-5767063aa5b2
	I0422 04:38:47.268252    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.268256    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.268260    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.268263    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.268267    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.268341    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:47.268603    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:47.268610    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.268616    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.268620    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.270919    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:47.270928    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.270933    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.270936    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.270939    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.270941    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.270944    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.270947    6416 round_trippers.go:580]     Audit-Id: 853629e4-7a64-4d9d-8289-e1ede9c3c21d
	I0422 04:38:47.271046    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:47.764982    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:47.765001    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.765035    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.765041    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.767482    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:47.767493    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.767517    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.767526    6416 round_trippers.go:580]     Audit-Id: 75be00ec-45af-4003-a457-d0dbfbdb0fa0
	I0422 04:38:47.767529    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.767538    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.767542    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.767544    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.767712    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:47.768051    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:47.768058    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.768064    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.768067    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.769281    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:47.769289    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.769294    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.769298    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.769301    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.769303    6416 round_trippers.go:580]     Audit-Id: 04b9a1cf-b605-4cb6-be5f-fe419aa474ad
	I0422 04:38:47.769305    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.769308    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.769380    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:48.265231    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:48.265247    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.265252    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.265257    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.267289    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:48.267297    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.267302    6416 round_trippers.go:580]     Audit-Id: 9ce00ecc-8b0d-4902-8c44-c54b6f296c86
	I0422 04:38:48.267305    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.267308    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.267323    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.267329    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.267331    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.267487    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:48.267828    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:48.267835    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.267841    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.267845    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.269220    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:48.269231    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.269238    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.269244    6416 round_trippers.go:580]     Audit-Id: 3128a881-e0ef-4652-9b51-c8c7010317f0
	I0422 04:38:48.269252    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.269261    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.269265    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.269281    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.269431    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:48.269605    6416 pod_ready.go:102] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"False"
	I0422 04:38:48.765761    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:48.765786    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.765822    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.765831    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.768239    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:48.768252    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.768259    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.768265    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.768270    6416 round_trippers.go:580]     Audit-Id: d1fbe98a-cc34-4c5b-a7f8-aa5d9b8c8d38
	I0422 04:38:48.768273    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.768282    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.768289    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.768483    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:48.768804    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:48.768814    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.768821    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.768833    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.770079    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:48.770089    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.770097    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.770101    6416 round_trippers.go:580]     Audit-Id: b8a8eca9-846e-4ebc-81ca-890df5377df7
	I0422 04:38:48.770105    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.770121    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.770131    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.770149    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.770273    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:49.264852    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:49.264878    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.264889    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.264898    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.267357    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:49.267372    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.267382    6416 round_trippers.go:580]     Audit-Id: 6a10a56f-e90f-4752-b78a-198e4fbd3395
	I0422 04:38:49.267389    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.267393    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.267400    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.267405    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.267409    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.267682    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:49.268032    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:49.268042    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.268049    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.268054    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.269410    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:49.269418    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.269423    6416 round_trippers.go:580]     Audit-Id: c129c66a-52ad-4856-83ae-981d1fcb4394
	I0422 04:38:49.269426    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.269428    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.269431    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.269433    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.269436    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.269544    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:49.764744    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:49.764802    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.764816    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.764823    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.767638    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:49.767654    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.767661    6416 round_trippers.go:580]     Audit-Id: 3a9bc166-1062-4cc7-b46d-7a9d608607a6
	I0422 04:38:49.767666    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.767669    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.767694    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.767701    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.767706    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.768032    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:49.768379    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:49.768389    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.768397    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.768403    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.769897    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:49.769917    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.769927    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.769933    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.769939    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.769942    6416 round_trippers.go:580]     Audit-Id: e21ae3b6-2625-46d9-813c-8fe2a01c647a
	I0422 04:38:49.769945    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.769947    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.770250    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.264456    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:50.270447    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.270464    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.270471    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.273384    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:50.273400    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.273407    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.273411    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.273414    6416 round_trippers.go:580]     Audit-Id: b4536eb8-0f8d-4d66-a15e-a19d7e686a19
	I0422 04:38:50.273418    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.273422    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.273443    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.273573    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:50.273911    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.273921    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.273928    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.273932    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.275273    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.275282    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.275286    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.275289    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.275292    6416 round_trippers.go:580]     Audit-Id: 17928738-05f1-4b0b-b8d9-29acec3403fa
	I0422 04:38:50.275295    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.275299    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.275301    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.275368    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.275552    6416 pod_ready.go:102] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"False"
	I0422 04:38:50.765139    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:50.765154    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.765160    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.765163    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.766756    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.766770    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.766776    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.766779    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.766783    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.766786    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.766789    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.766792    6416 round_trippers.go:580]     Audit-Id: fbb2d05b-1b04-463c-89d8-0da3fdea8fd9
	I0422 04:38:50.767001    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1303","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0422 04:38:50.767295    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.767303    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.767309    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.767313    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.768512    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.768521    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.768526    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.768529    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.768532    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.768536    6416 round_trippers.go:580]     Audit-Id: 2a9bce2b-b2a3-4ef6-8be6-ecc6f0afb22b
	I0422 04:38:50.768539    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.768542    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.768624    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.768811    6416 pod_ready.go:92] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.768819    6416 pod_ready.go:81] duration metric: took 6.504960902s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.768829    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
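
The block above is one iteration of minikube's pod_ready wait: fetch the pod, inspect its Ready condition, re-fetch the hosting node, and retry on a roughly 500ms cadence until the condition flips to True (here after 6.5s). A minimal sketch of such a loop with client-go follows; the helper name and structure are illustrative, not minikube's actual code.

    package example

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the API server until the named pod reports the
    // Ready condition as True, or the timeout elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the poll spacing visible in the log
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }
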
	I0422 04:38:50.768870    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I0422 04:38:50.768876    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.768881    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.768885    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.770033    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.770066    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.770073    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.770077    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.770082    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.770087    6416 round_trippers.go:580]     Audit-Id: 1ba6bc17-ac93-466e-b4c2-76c657606f1c
	I0422 04:38:50.770090    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.770095    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.770336    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"cc0086bd-2049-4d09-a498-d26cc78b6968","resourceVersion":"1279","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.mirror":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.seen":"2024-04-22T11:29:12.576362787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0422 04:38:50.770663    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.770669    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.770674    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.770679    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.772449    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.772459    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.772466    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.772480    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.772485    6416 round_trippers.go:580]     Audit-Id: c9fe64eb-5eb8-4273-b5a7-3e12fd8fa9c1
	I0422 04:38:50.772487    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.772490    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.772493    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.772578    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.772735    6416 pod_ready.go:92] pod "kube-apiserver-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.772743    6416 pod_ready.go:81] duration metric: took 3.907787ms for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.772748    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.772781    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I0422 04:38:50.772786    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.772791    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.772795    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.774160    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.774169    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.774175    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.774178    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.774180    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.774182    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.774186    6416 round_trippers.go:580]     Audit-Id: 8df13ed3-5f76-4a6d-9964-b92ff2b0ce04
	I0422 04:38:50.774189    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.774293    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"7d730ce3-3f6c-4cc8-aff2-bbcf584056c7","resourceVersion":"1281","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.mirror":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.seen":"2024-04-22T11:29:12.576363612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0422 04:38:50.774517    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.774524    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.774530    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.774534    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.775402    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:50.775408    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.775411    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.775421    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.775427    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.775433    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.775438    6416 round_trippers.go:580]     Audit-Id: a3118fc0-6324-4f73-a6cc-2197d9c958e5
	I0422 04:38:50.775443    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.775545    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.775707    6416 pod_ready.go:92] pod "kube-controller-manager-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.775714    6416 pod_ready.go:81] duration metric: took 2.960309ms for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.775719    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.775743    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q52c
	I0422 04:38:50.775747    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.775752    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.775756    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.776718    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:50.776724    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.776729    6416 round_trippers.go:580]     Audit-Id: 43b58cce-ba9f-4610-96fa-682e917b17e9
	I0422 04:38:50.776733    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.776739    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.776742    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.776746    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.776758    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.776882    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4q52c","generateName":"kube-proxy-","namespace":"kube-system","uid":"764856b1-b523-4b58-8a33-6b81ab928c79","resourceVersion":"1162","creationTimestamp":"2024-04-22T11:32:35Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0422 04:38:50.777094    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m03
	I0422 04:38:50.777101    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.777106    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.777109    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.778000    6416 round_trippers.go:574] Response Status: 404 Not Found in 0 milliseconds
	I0422 04:38:50.778007    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.778012    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.778028    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.778037    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.778041    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.778045    6416 round_trippers.go:580]     Content-Length: 210
	I0422 04:38:50.778051    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.778055    6416 round_trippers.go:580]     Audit-Id: 3c8e6efd-7787-488b-b4ed-39312495da3b
	I0422 04:38:50.778072    6416 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-449000-m03\" not found","reason":"NotFound","details":{"name":"multinode-449000-m03","kind":"nodes"},"code":404}
	I0422 04:38:50.778117    6416 pod_ready.go:97] node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
	I0422 04:38:50.778125    6416 pod_ready.go:81] duration metric: took 2.400385ms for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:50.778130    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
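
The 404 branch above is why the wait skips rather than fails: the pod's host node (multinode-449000-m03) no longer exists, so the pod can never become Ready there. client-go distinguishes this case with apierrors.IsNotFound; a sketch, assuming the same clientset as before:

    package example

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hostNodeExists reports whether the node hosting a pod is still present.
    // A 404 from Nodes().Get means the pod should be skipped rather than
    // waited on for the full timeout.
    func hostNodeExists(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
        _, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }
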
	I0422 04:38:50.778135    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.778169    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jrtv2
	I0422 04:38:50.778174    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.778179    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.778182    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.779084    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:50.779092    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.779099    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.779104    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.779109    6416 round_trippers.go:580]     Audit-Id: 8d2a0118-805c-4b91-bc4e-d9ca1837220e
	I0422 04:38:50.779115    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.779121    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.779132    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.779238    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jrtv2","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6078b93-4180-484d-b486-9ddf193ba84e","resourceVersion":"1210","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0422 04:38:50.779463    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.779470    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.779475    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.779479    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.780533    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.780538    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.780543    6416 round_trippers.go:580]     Audit-Id: 29ea273b-3d02-4f60-9358-61077e4e1c4c
	I0422 04:38:50.780545    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.780566    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.780570    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.780573    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.780576    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.780765    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.780925    6416 pod_ready.go:92] pod "kube-proxy-jrtv2" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.780932    6416 pod_ready.go:81] duration metric: took 2.791209ms for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.780937    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.966555    6416 request.go:629] Waited for 185.589322ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:50.966621    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:50.966626    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.966632    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.966636    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.968144    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.968153    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.968158    6416 round_trippers.go:580]     Audit-Id: 65f908c6-4282-4d43-a000-679bd0f86f8f
	I0422 04:38:50.968161    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.968164    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.968166    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.968181    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.968187    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:50.968350    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lx9ft","generateName":"kube-proxy-","namespace":"kube-system","uid":"38104bb7-7d9e-4377-9912-06cb23591941","resourceVersion":"1031","creationTimestamp":"2024-04-22T11:31:54Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:31:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
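
The "Waited ... due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's built-in client-side rate limiter, which defaults to 5 QPS with a burst of 10 when left unset, not from the API server. The knobs live on rest.Config; a sketch:

    package example

    import "k8s.io/client-go/rest"

    // configureRateLimit shows the fields behind the throttling message.
    // Raising them removes these client-side waits; the server can still
    // throttle via API priority and fairness.
    func configureRateLimit(cfg *rest.Config) {
        cfg.QPS = 5    // steady-state requests per second
        cfg.Burst = 10 // short bursts allowed above QPS
    }
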
	I0422 04:38:51.166553    6416 request.go:629] Waited for 197.931887ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:51.166609    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:51.166616    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.166628    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.166631    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.168178    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:51.168187    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.168192    6416 round_trippers.go:580]     Audit-Id: 83d73d71-3a36-41cb-96b8-87e83ab6c9fa
	I0422 04:38:51.168195    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.168198    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.168202    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.168205    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.168207    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.168266    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000-m02","uid":"cf524355-0b8a-4495-8a18-e4d0f38226d6","resourceVersion":"1048","creationTimestamp":"2024-04-22T11:36:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_22T04_36_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:36:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0422 04:38:51.168440    6416 pod_ready.go:92] pod "kube-proxy-lx9ft" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:51.168449    6416 pod_ready.go:81] duration metric: took 387.505107ms for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:51.168456    6416 pod_ready.go:38] duration metric: took 13.034672669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 04:38:51.168472    6416 api_server.go:52] waiting for apiserver process to appear ...
	I0422 04:38:51.168526    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:51.184969    6416 command_runner.go:130] > 1523
	I0422 04:38:51.185682    6416 api_server.go:72] duration metric: took 13.321263077s to wait for apiserver process to appear ...
	I0422 04:38:51.185694    6416 api_server.go:88] waiting for apiserver healthz status ...
	I0422 04:38:51.185708    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:51.190236    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
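
The health check above is a plain HTTPS GET against /healthz that expects a 200 with the literal body "ok", exactly as logged. A self-contained sketch; the insecure TLS setting is a stand-in for the cluster-CA handling minikube actually performs:

    package example

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    // checkHealthz probes <base>/healthz and treats 200 + "ok" as healthy.
    func checkHealthz(base string) error {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
        }}
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }
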
	I0422 04:38:51.190268    6416 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0422 04:38:51.190272    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.190279    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.190284    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.190932    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:51.190941    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.190948    6416 round_trippers.go:580]     Content-Length: 263
	I0422 04:38:51.190952    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.190954    6416 round_trippers.go:580]     Audit-Id: 3295cd18-cbae-4fa3-95bd-2fbd1071fba3
	I0422 04:38:51.190957    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.190960    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.190962    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.190965    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.191004    6416 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0422 04:38:51.191031    6416 api_server.go:141] control plane version: v1.30.0
	I0422 04:38:51.191042    6416 api_server.go:131] duration metric: took 5.341125ms to wait for apiserver health ...
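
The control-plane version is read by decoding the /version payload shown above. A sketch, assuming an *http.Client already configured to trust the API server's certificate:

    package example

    import (
        "encoding/json"
        "net/http"
    )

    // versionInfo declares only the field read here; the payload above
    // carries several more (gitCommit, buildDate, goVersion, ...).
    type versionInfo struct {
        GitVersion string `json:"gitVersion"` // e.g. "v1.30.0"
    }

    func controlPlaneVersion(client *http.Client, base string) (string, error) {
        resp, err := client.Get(base + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil
    }
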
	I0422 04:38:51.191048    6416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 04:38:51.367153    6416 request.go:629] Waited for 176.072086ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.367207    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.367212    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.367218    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.367222    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.370493    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:51.370502    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.370507    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.370510    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.370513    6416 round_trippers.go:580]     Audit-Id: e6265c70-e04e-488a-b481-9e0d923b91a4
	I0422 04:38:51.370516    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.370521    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.370525    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.371752    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1290","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86242 chars]
	I0422 04:38:51.373654    6416 system_pods.go:59] 12 kube-system pods found
	I0422 04:38:51.373664    6416 system_pods.go:61] "coredns-7db6d8ff4d-tnr9d" [20633bf5-f995-44a1-b778-441b906496cd] Running
	I0422 04:38:51.373668    6416 system_pods.go:61] "etcd-multinode-449000" [ff3afd40-3400-4293-9fe4-03d22b8aba13] Running
	I0422 04:38:51.373671    6416 system_pods.go:61] "kindnet-jkzvq" [1c07681b-b4af-41b9-917c-01183dcd9e7f] Running
	I0422 04:38:51.373674    6416 system_pods.go:61] "kindnet-pbqsb" [f1537c83-ca18-43b9-8fc5-91de97ef1d76] Running
	I0422 04:38:51.373676    6416 system_pods.go:61] "kindnet-sm2l6" [9c708c64-7f5e-4502-9381-d97e024ea343] Running
	I0422 04:38:51.373679    6416 system_pods.go:61] "kube-apiserver-multinode-449000" [cc0086bd-2049-4d09-a498-d26cc78b6968] Running
	I0422 04:38:51.373683    6416 system_pods.go:61] "kube-controller-manager-multinode-449000" [7d730ce3-3f6c-4cc8-aff2-bbcf584056c7] Running
	I0422 04:38:51.373686    6416 system_pods.go:61] "kube-proxy-4q52c" [764856b1-b523-4b58-8a33-6b81ab928c79] Running
	I0422 04:38:51.373689    6416 system_pods.go:61] "kube-proxy-jrtv2" [e6078b93-4180-484d-b486-9ddf193ba84e] Running
	I0422 04:38:51.373692    6416 system_pods.go:61] "kube-proxy-lx9ft" [38104bb7-7d9e-4377-9912-06cb23591941] Running
	I0422 04:38:51.373696    6416 system_pods.go:61] "kube-scheduler-multinode-449000" [227c4576-009e-4a6c-8bc8-a3e9d9e62aae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 04:38:51.373700    6416 system_pods.go:61] "storage-provisioner" [f286f444-3ade-4e54-85bb-8577f0234cca] Running
	I0422 04:38:51.373716    6416 system_pods.go:74] duration metric: took 182.661024ms to wait for pod list to return data ...
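
The pod census above comes from a single List on kube-system; note the kube-scheduler entry passes even with unready containers, which suggests the check keys on phase rather than the Ready condition. A minimal sketch of that tally:

    package example

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // countRunning lists kube-system pods and reports how many are in
    // phase Running versus the total found.
    func countRunning(ctx context.Context, cs kubernetes.Interface) (running, total int, err error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return 0, 0, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                running++
            }
        }
        return running, len(pods.Items), nil
    }
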
	I0422 04:38:51.373724    6416 default_sa.go:34] waiting for default service account to be created ...
	I0422 04:38:51.567155    6416 request.go:629] Waited for 193.384955ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0422 04:38:51.567202    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0422 04:38:51.567207    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.567214    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.567218    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.573022    6416 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 04:38:51.573035    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.573050    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.573055    6416 round_trippers.go:580]     Audit-Id: cf6e9481-a0d3-4de5-b256-26c4c8e666f4
	I0422 04:38:51.573060    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.573064    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.573073    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.573076    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.573079    6416 round_trippers.go:580]     Content-Length: 262
	I0422 04:38:51.573090    6416 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"644e2bca-08d9-4fd2-bd78-af290bc8acca","resourceVersion":"355","creationTimestamp":"2024-04-22T11:29:27Z"}}]}
	I0422 04:38:51.573208    6416 default_sa.go:45] found service account: "default"
	I0422 04:38:51.573218    6416 default_sa.go:55] duration metric: took 199.488037ms for default service account to be created ...
	I0422 04:38:51.573226    6416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 04:38:51.767187    6416 request.go:629] Waited for 193.91906ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.767260    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.767270    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.767280    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.767291    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.771453    6416 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 04:38:51.771476    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.771483    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.771486    6416 round_trippers.go:580]     Audit-Id: e5e2291c-d4c6-4259-83a4-be723c83db8f
	I0422 04:38:51.771500    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.771503    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.771506    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.771520    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.772043    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1290","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86242 chars]
	I0422 04:38:51.773953    6416 system_pods.go:86] 12 kube-system pods found
	I0422 04:38:51.773965    6416 system_pods.go:89] "coredns-7db6d8ff4d-tnr9d" [20633bf5-f995-44a1-b778-441b906496cd] Running
	I0422 04:38:51.773969    6416 system_pods.go:89] "etcd-multinode-449000" [ff3afd40-3400-4293-9fe4-03d22b8aba13] Running
	I0422 04:38:51.773974    6416 system_pods.go:89] "kindnet-jkzvq" [1c07681b-b4af-41b9-917c-01183dcd9e7f] Running
	I0422 04:38:51.773977    6416 system_pods.go:89] "kindnet-pbqsb" [f1537c83-ca18-43b9-8fc5-91de97ef1d76] Running
	I0422 04:38:51.773980    6416 system_pods.go:89] "kindnet-sm2l6" [9c708c64-7f5e-4502-9381-d97e024ea343] Running
	I0422 04:38:51.773984    6416 system_pods.go:89] "kube-apiserver-multinode-449000" [cc0086bd-2049-4d09-a498-d26cc78b6968] Running
	I0422 04:38:51.773988    6416 system_pods.go:89] "kube-controller-manager-multinode-449000" [7d730ce3-3f6c-4cc8-aff2-bbcf584056c7] Running
	I0422 04:38:51.773991    6416 system_pods.go:89] "kube-proxy-4q52c" [764856b1-b523-4b58-8a33-6b81ab928c79] Running
	I0422 04:38:51.773994    6416 system_pods.go:89] "kube-proxy-jrtv2" [e6078b93-4180-484d-b486-9ddf193ba84e] Running
	I0422 04:38:51.773998    6416 system_pods.go:89] "kube-proxy-lx9ft" [38104bb7-7d9e-4377-9912-06cb23591941] Running
	I0422 04:38:51.774005    6416 system_pods.go:89] "kube-scheduler-multinode-449000" [227c4576-009e-4a6c-8bc8-a3e9d9e62aae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 04:38:51.774012    6416 system_pods.go:89] "storage-provisioner" [f286f444-3ade-4e54-85bb-8577f0234cca] Running
	I0422 04:38:51.774018    6416 system_pods.go:126] duration metric: took 200.786794ms to wait for k8s-apps to be running ...
	I0422 04:38:51.774026    6416 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 04:38:51.774081    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 04:38:51.786697    6416 system_svc.go:56] duration metric: took 12.665074ms WaitForService to wait for kubelet
	I0422 04:38:51.786712    6416 kubeadm.go:576] duration metric: took 13.922291142s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 04:38:51.786728    6416 node_conditions.go:102] verifying NodePressure condition ...
	I0422 04:38:51.967301    6416 request.go:629] Waited for 180.495069ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0422 04:38:51.967416    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0422 04:38:51.967429    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.967440    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.967446    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.969959    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:51.969974    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.969981    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.969986    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:52 GMT
	I0422 04:38:51.969990    6416 round_trippers.go:580]     Audit-Id: d75795fc-adb0-41c3-bca7-51415a4e6406
	I0422 04:38:51.970015    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.970025    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.970030    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.970317    6416 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0422 04:38:51.970734    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:51.970747    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:51.970754    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:51.970758    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:51.970763    6416 node_conditions.go:105] duration metric: took 184.029883ms to run NodePressure ...
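
The per-node figures above (2 CPUs and 17734596Ki of ephemeral storage, reported once per node in the list) come straight from the NodeList capacity fields. A sketch with client-go:

    package example

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reports each node's CPU and ephemeral-storage
    // capacity, mirroring the node_conditions readout.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }
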
	I0422 04:38:51.970774    6416 start.go:240] waiting for startup goroutines ...
	I0422 04:38:51.970787    6416 start.go:245] waiting for cluster config update ...
	I0422 04:38:51.970796    6416 start.go:254] writing updated cluster config ...
	I0422 04:38:51.994473    6416 out.go:177] 
	I0422 04:38:52.014790    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:52.014945    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:52.037450    6416 out.go:177] * Starting "multinode-449000-m02" worker node in "multinode-449000" cluster
	I0422 04:38:52.080255    6416 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 04:38:52.080295    6416 cache.go:56] Caching tarball of preloaded images
	I0422 04:38:52.080475    6416 preload.go:173] Found /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0422 04:38:52.080495    6416 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 04:38:52.080626    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:52.081591    6416 start.go:360] acquireMachinesLock for multinode-449000-m02: {Name:mke81a6cfc4bf5ce8e1de7ad51be0d2fed5c5582 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 04:38:52.081700    6416 start.go:364] duration metric: took 82.942µs to acquireMachinesLock for "multinode-449000-m02"
	I0422 04:38:52.081726    6416 start.go:96] Skipping create...Using existing machine configuration
	I0422 04:38:52.081733    6416 fix.go:54] fixHost starting: m02
	I0422 04:38:52.082198    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:52.082217    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:52.091648    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52234
	I0422 04:38:52.092000    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:52.092340    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:52.092358    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:52.092554    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:52.092650    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:38:52.092744    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I0422 04:38:52.092825    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:52.092888    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 6310
	I0422 04:38:52.093843    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid 6310 missing from process table
	I0422 04:38:52.093863    6416 fix.go:112] recreateIfNeeded on multinode-449000-m02: state=Stopped err=<nil>
	I0422 04:38:52.093874    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	W0422 04:38:52.093958    6416 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 04:38:52.117231    6416 out.go:177] * Restarting existing hyperkit VM for "multinode-449000-m02" ...
	I0422 04:38:52.158172    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .Start
	I0422 04:38:52.158386    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:52.158410    6416 main.go:141] libmachine: (multinode-449000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid
	I0422 04:38:52.159725    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid 6310 missing from process table
	I0422 04:38:52.159746    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | pid 6310 is in state "Stopped"
	I0422 04:38:52.159764    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid...
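
The "pid 6310 missing from process table" decision rests on a liveness probe of the pid recorded in hyperkit.pid: signal 0 tests for existence without delivering anything. A sketch of that probe (Unix-only):

    package example

    import "syscall"

    // pidAlive reports whether a process with the given pid exists.
    // Kill with signal 0 performs the existence/permission check only.
    func pidAlive(pid int) bool {
        return syscall.Kill(pid, syscall.Signal(0)) == nil
    }
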
	I0422 04:38:52.160132    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Using UUID 6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0
	I0422 04:38:52.186324    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Generated MAC e2:d0:5:63:30:40
	I0422 04:38:52.186345    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000
	I0422 04:38:52.186507    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c3200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0422 04:38:52.186538    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c3200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:38:52.186610    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/multinode-449000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"}
	I0422 04:38:52.186653    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/multinode-449000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"
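
The argument vector logged above maps each virtual device to a hyperkit PCI slot (-s). A minimal Go sketch of how such an argv can be assembled from the machine's state directory; buildHyperkitArgs and its parameters are illustrative names, not the driver's actual API, and the -l (console pty) and -f (kexec boot) arguments are omitted for brevity.

package main

import (
	"fmt"
	"path/filepath"
)

// Illustrative only: rebuilds the slot (-s) layout seen in the DEBUG lines above.
func buildHyperkitArgs(machineDir, name, uuid string, cpus, memMB int) []string {
	return []string{
		"-A", "-u",
		"-F", filepath.Join(machineDir, "hyperkit.pid"), // pid file behind the stale-pid check earlier in the log
		"-c", fmt.Sprintf("%d", cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge", // PCI slot 0: host bridge
		"-s", "31,lpc",         // LPC bridge for the serial console
		"-s", "1:0,virtio-net", // vmnet NIC; its MAC comes from -U (the "Generated MAC" line above)
		"-U", uuid,
		"-s", "2:0,virtio-blk," + filepath.Join(machineDir, name+".rawdisk"), // root disk
		"-s", "3,ahci-cd," + filepath.Join(machineDir, "boot2docker.iso"),    // boot ISO
		"-s", "4,virtio-rnd", // entropy device (hw_rng_model=virtio on the kernel cmdline)
	}
}

func main() {
	fmt.Println(buildHyperkitArgs(
		"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02",
		"multinode-449000-m02", "6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0", 2, 2200))
}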
	I0422 04:38:52.186674    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0422 04:38:52.188024    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Pid is 6455
	I0422 04:38:52.188486    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 0
	I0422 04:38:52.188514    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:52.188579    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 6455
	I0422 04:38:52.190238    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for e2:d0:5:63:30:40 in /var/db/dhcpd_leases ...
	I0422 04:38:52.190306    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0422 04:38:52.190332    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:3e:5c:84:88:5b:2b ID:1,3e:5c:84:88:5b:2b Lease:0x66279dab}
	I0422 04:38:52.190354    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:33:e:18:56:49 ID:1,92:33:e:18:56:49 Lease:0x66264c0f}
	I0422 04:38:52.190368    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:e2:d0:5:63:30:40 ID:1,e2:d0:5:63:30:40 Lease:0x66279d43}
	I0422 04:38:52.190382    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Found match: e2:d0:5:63:30:40
	I0422 04:38:52.190396    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | IP: 192.169.0.17
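
The IP above is resolved by scanning macOS's vmnet lease file for the generated MAC, as the "Searching for e2:d0:5:63:30:40 in /var/db/dhcpd_leases" lines show. A rough, self-contained sketch of that lookup; the ip_address=/hw_address= line layout and their ordering within a lease block are assumptions consistent with the Name/IPAddress/HWAddress fields echoed above, and the real parser lives in minikube's hyperkit driver.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the lease file line by line, remembering the last ip_address
// seen, and returns it when the following hw_address matches the target MAC.
func ipForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// value looks like "1,e2:d0:5:63:30:40"; the MAC follows the comma
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil // corresponds to the "Found match" / "IP:" lines above
			}
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, leasePath)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "e2:d0:5:63:30:40")
	fmt.Println(ip, err)
}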
	I0422 04:38:52.190433    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetConfigRaw
	I0422 04:38:52.191085    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I0422 04:38:52.191263    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:52.191782    6416 machine.go:94] provisionDockerMachine start ...
	I0422 04:38:52.191793    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:38:52.191941    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:38:52.192043    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:38:52.192142    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:38:52.192235    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:38:52.192333    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:38:52.192465    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:52.192647    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:38:52.192656    6416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 04:38:52.195735    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0422 04:38:52.204103    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0422 04:38:52.205110    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:52.205126    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:52.205136    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:52.205147    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:52.585184    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0422 04:38:52.585203    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0422 04:38:52.699814    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:52.699834    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:52.699864    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:52.699884    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:52.700761    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0422 04:38:52.700781    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0422 04:38:57.992005    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0422 04:38:57.992071    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0422 04:38:57.992086    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0422 04:38:58.016646    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:58 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0422 04:39:27.258967    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 04:39:27.258982    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I0422 04:39:27.259114    6416 buildroot.go:166] provisioning hostname "multinode-449000-m02"
	I0422 04:39:27.259125    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I0422 04:39:27.259217    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.259312    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.259405    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.259487    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.259577    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.259704    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.259866    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.259875    6416 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000-m02 && echo "multinode-449000-m02" | sudo tee /etc/hostname
	I0422 04:39:27.331893    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000-m02
	
	I0422 04:39:27.331913    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.332049    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.332142    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.332243    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.332354    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.332492    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.332640    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.332651    6416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 04:39:27.400238    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 04:39:27.400266    6416 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18711-1033/.minikube CaCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18711-1033/.minikube}
	I0422 04:39:27.400277    6416 buildroot.go:174] setting up certificates
	I0422 04:39:27.400284    6416 provision.go:84] configureAuth start
	I0422 04:39:27.400291    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I0422 04:39:27.400426    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I0422 04:39:27.400516    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.400607    6416 provision.go:143] copyHostCerts
	I0422 04:39:27.400634    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:39:27.400695    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem, removing ...
	I0422 04:39:27.400701    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:39:27.400845    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem (1082 bytes)
	I0422 04:39:27.401042    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:39:27.401082    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem, removing ...
	I0422 04:39:27.401088    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:39:27.401177    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem (1123 bytes)
	I0422 04:39:27.401337    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:39:27.401378    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem, removing ...
	I0422 04:39:27.401383    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:39:27.401458    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem (1675 bytes)
	I0422 04:39:27.401605    6416 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem org=jenkins.multinode-449000-m02 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-449000-m02]
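
The san=[...] list above becomes the server certificate's subject alternative names, split into IP and DNS entries and signed by the profile's CA. A minimal sketch of that step with Go's crypto/x509; newServerCert is a made-up name, not minikube's provision API, and the throwaway CA in main stands in for .minikube/certs/ca.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server cert for the given SANs with the CA key.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans { // e.g. [127.0.0.1 192.169.0.17 localhost minikube multinode-449000-m02]
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// throwaway self-signed CA for demonstration only
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now().Add(-time.Hour), NotAfter: time.Now().AddDate(10, 0, 0),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := newServerCert(ca, caKey, "jenkins.multinode-449000-m02",
		[]string{"127.0.0.1", "192.169.0.17", "localhost", "minikube", "multinode-449000-m02"})
	fmt.Println(len(der), err)
}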
	I0422 04:39:27.550203    6416 provision.go:177] copyRemoteCerts
	I0422 04:39:27.550254    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 04:39:27.550268    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.550408    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.550500    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.550577    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.550655    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:39:27.590164    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 04:39:27.590247    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0422 04:39:27.609334    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 04:39:27.609408    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0422 04:39:27.628163    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 04:39:27.628229    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 04:39:27.647070    6416 provision.go:87] duration metric: took 246.777365ms to configureAuth
	I0422 04:39:27.647083    6416 buildroot.go:189] setting minikube options for container-runtime
	I0422 04:39:27.647258    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:39:27.647276    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:27.647405    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.647487    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.647568    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.647634    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.647722    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.647831    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.647951    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.647958    6416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 04:39:27.711230    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0422 04:39:27.711244    6416 buildroot.go:70] root file system type: tmpfs
	I0422 04:39:27.711329    6416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 04:39:27.711348    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.711481    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.711569    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.711657    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.711760    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.711905    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.712045    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.712090    6416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 04:39:27.784685    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 04:39:27.784709    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.784846    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.784942    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.785023    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.785119    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.785252    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.785395    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.785413    6416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 04:39:29.324027    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0422 04:39:29.324042    6416 machine.go:97] duration metric: took 37.132053891s to provisionDockerMachine
	I0422 04:39:29.324050    6416 start.go:293] postStartSetup for "multinode-449000-m02" (driver="hyperkit")
	I0422 04:39:29.324061    6416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 04:39:29.324071    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.324246    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 04:39:29.324268    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:29.324354    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.324449    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.324543    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.324621    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:39:29.362161    6416 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 04:39:29.365050    6416 command_runner.go:130] > NAME=Buildroot
	I0422 04:39:29.365059    6416 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 04:39:29.365063    6416 command_runner.go:130] > ID=buildroot
	I0422 04:39:29.365083    6416 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 04:39:29.365091    6416 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 04:39:29.365170    6416 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 04:39:29.365179    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/addons for local assets ...
	I0422 04:39:29.365281    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/files for local assets ...
	I0422 04:39:29.365469    6416 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> 14842.pem in /etc/ssl/certs
	I0422 04:39:29.365475    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> /etc/ssl/certs/14842.pem
	I0422 04:39:29.365676    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 04:39:29.373469    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /etc/ssl/certs/14842.pem (1708 bytes)
	I0422 04:39:29.392405    6416 start.go:296] duration metric: took 68.34327ms for postStartSetup
	I0422 04:39:29.392424    6416 fix.go:56] duration metric: took 37.310491855s for fixHost
	I0422 04:39:29.392439    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:29.392575    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.392660    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.392755    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.392849    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.392958    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:29.393097    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:29.393104    6416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 04:39:29.454814    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713785969.627304722
	
	I0422 04:39:29.454826    6416 fix.go:216] guest clock: 1713785969.627304722
	I0422 04:39:29.454831    6416 fix.go:229] Guest: 2024-04-22 04:39:29.627304722 -0700 PDT Remote: 2024-04-22 04:39:29.39243 -0700 PDT m=+79.186243193 (delta=234.874722ms)
	I0422 04:39:29.454843    6416 fix.go:200] guest clock delta is within tolerance: 234.874722ms
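
The tolerance check above compares the guest's `date +%s.%N` output against the host clock (Guest vs. Remote, delta=234.874722ms). A small sketch of that comparison; the one-second tolerance constant is an assumption rather than a value taken from the log, but the logged ~235ms delta passes either way.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's seconds.nanoseconds timestamp and returns
// how far the guest clock is ahead of (or behind) the given host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// values taken from the "guest clock" lines above
	d, _ := clockDelta("1713785969.627304722", time.Unix(1713785969, 392430000))
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within=%v\n", d, math.Abs(float64(d)) < float64(tolerance))
}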
	I0422 04:39:29.454848    6416 start.go:83] releasing machines lock for "multinode-449000-m02", held for 37.372937032s
	I0422 04:39:29.454865    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.454999    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I0422 04:39:29.478473    6416 out.go:177] * Found network options:
	I0422 04:39:29.499392    6416 out.go:177]   - NO_PROXY=192.169.0.16
	W0422 04:39:29.520295    6416 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 04:39:29.520322    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.520866    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.520998    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.521071    6416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 04:39:29.521104    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	W0422 04:39:29.521164    6416 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 04:39:29.521227    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.521238    6416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 04:39:29.521268    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:29.521394    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.521416    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.521525    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.521569    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.521689    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:39:29.521707    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.521838    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
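
The two "fail to check proxy env: Error ip not in block" warnings above come from testing whether this node's IP (192.169.0.17) is covered by the NO_PROXY value (the bare IP 192.169.0.16, which is neither equal nor a containing CIDR block). A rough sketch of that containment check; ipInNoProxy is an illustrative name, and the real logic lives in minikube's proxy handling.

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip matches a NO_PROXY entry exactly or falls
// inside a CIDR entry.
func ipInNoProxy(ip, noProxy string) bool {
	addr := net.ParseIP(ip)
	for _, block := range strings.Split(noProxy, ",") {
		if block == ip {
			return true
		}
		if _, cidr, err := net.ParseCIDR(block); err == nil && cidr.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	// NO_PROXY=192.169.0.16 is a bare IP, so 192.169.0.17 is "not in block"
	fmt.Println(ipInNoProxy("192.169.0.17", "192.169.0.16"))
}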
	I0422 04:39:29.556257    6416 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 04:39:29.556409    6416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 04:39:29.556470    6416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 04:39:29.604422    6416 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 04:39:29.604897    6416 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0422 04:39:29.604914    6416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 04:39:29.604921    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:39:29.604992    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:39:29.620264    6416 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0422 04:39:29.620481    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0422 04:39:29.629616    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 04:39:29.638708    6416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 04:39:29.638752    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 04:39:29.647676    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:39:29.656675    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 04:39:29.665598    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:39:29.674573    6416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 04:39:29.683829    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 04:39:29.692872    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 04:39:29.702132    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 04:39:29.711303    6416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 04:39:29.719749    6416 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 04:39:29.719901    6416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 04:39:29.728145    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:39:29.834420    6416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0422 04:39:29.852642    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:39:29.852725    6416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 04:39:29.870613    6416 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0422 04:39:29.871052    6416 command_runner.go:130] > [Unit]
	I0422 04:39:29.871060    6416 command_runner.go:130] > Description=Docker Application Container Engine
	I0422 04:39:29.871064    6416 command_runner.go:130] > Documentation=https://docs.docker.com
	I0422 04:39:29.871070    6416 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0422 04:39:29.871074    6416 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0422 04:39:29.871082    6416 command_runner.go:130] > StartLimitBurst=3
	I0422 04:39:29.871086    6416 command_runner.go:130] > StartLimitIntervalSec=60
	I0422 04:39:29.871090    6416 command_runner.go:130] > [Service]
	I0422 04:39:29.871093    6416 command_runner.go:130] > Type=notify
	I0422 04:39:29.871096    6416 command_runner.go:130] > Restart=on-failure
	I0422 04:39:29.871101    6416 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0422 04:39:29.871106    6416 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0422 04:39:29.871116    6416 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0422 04:39:29.871122    6416 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0422 04:39:29.871128    6416 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0422 04:39:29.871133    6416 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0422 04:39:29.871138    6416 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0422 04:39:29.871144    6416 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0422 04:39:29.871157    6416 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0422 04:39:29.871171    6416 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0422 04:39:29.871175    6416 command_runner.go:130] > ExecStart=
	I0422 04:39:29.871203    6416 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0422 04:39:29.871213    6416 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0422 04:39:29.871221    6416 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0422 04:39:29.871226    6416 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0422 04:39:29.871231    6416 command_runner.go:130] > LimitNOFILE=infinity
	I0422 04:39:29.871237    6416 command_runner.go:130] > LimitNPROC=infinity
	I0422 04:39:29.871241    6416 command_runner.go:130] > LimitCORE=infinity
	I0422 04:39:29.871245    6416 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0422 04:39:29.871250    6416 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0422 04:39:29.871254    6416 command_runner.go:130] > TasksMax=infinity
	I0422 04:39:29.871261    6416 command_runner.go:130] > TimeoutStartSec=0
	I0422 04:39:29.871269    6416 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0422 04:39:29.871272    6416 command_runner.go:130] > Delegate=yes
	I0422 04:39:29.871278    6416 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0422 04:39:29.871308    6416 command_runner.go:130] > KillMode=process
	I0422 04:39:29.871312    6416 command_runner.go:130] > [Install]
	I0422 04:39:29.871316    6416 command_runner.go:130] > WantedBy=multi-user.target
	I0422 04:39:29.871416    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:39:29.884933    6416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 04:39:29.904209    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:39:29.915630    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:39:29.926586    6416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0422 04:39:29.946970    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:39:29.957645    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:39:29.972878    6416 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0422 04:39:29.973104    6416 ssh_runner.go:195] Run: which cri-dockerd
	I0422 04:39:29.975896    6416 command_runner.go:130] > /usr/bin/cri-dockerd
	I0422 04:39:29.976067    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 04:39:29.983521    6416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 04:39:29.997905    6416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 04:39:30.098128    6416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 04:39:30.199672    6416 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 04:39:30.199698    6416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
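
The log records only that a 130-byte /etc/docker/daemon.json was generated in memory and copied over to switch dockerd to the cgroupfs cgroup driver. A sketch of producing such a file; aside from the cgroup-driver exec-opt implied by the "configuring docker to use cgroupfs" line, the other keys are plausible assumptions, not contents taken from the log.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// hypothetical daemon.json shape; only native.cgroupdriver=cgroupfs is
	// supported by the log line above
	daemon := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(daemon, "", "  ")
	fmt.Println(string(b)) // this payload would be scp'd to /etc/docker/daemon.json
}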
	I0422 04:39:30.215471    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:39:30.324911    6416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 04:40:31.458397    6416 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0422 04:40:31.458419    6416 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0422 04:40:31.458485    6416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.048186237s)
	I0422 04:40:31.458550    6416 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0422 04:40:31.468468    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0422 04:40:31.468481    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500273741Z" level=info msg="Starting up"
	I0422 04:40:31.468494    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500896562Z" level=info msg="containerd not running, starting managed containerd"
	I0422 04:40:31.468509    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.501458070Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	I0422 04:40:31.468520    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.519154130Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0422 04:40:31.468531    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536175934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0422 04:40:31.468542    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536200901Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0422 04:40:31.468552    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536237889Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0422 04:40:31.468561    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536248409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468572    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536401321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468581    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536443904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468600    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536555068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468609    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536590399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468618    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536602655Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0422 04:40:31.468628    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536609559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468638    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536757403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468647    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536982056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468661    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538601388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468670    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538639201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468762    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538724354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468784    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538735079Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0422 04:40:31.468798    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538857030Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0422 04:40:31.468809    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538906380Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0422 04:40:31.468816    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538916250Z" level=info msg="metadata content store policy set" policy=shared
	I0422 04:40:31.468825    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540934544Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0422 04:40:31.468836    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540980765Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0422 04:40:31.468845    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540995031Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0422 04:40:31.468854    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541005291Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0422 04:40:31.468863    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541017645Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0422 04:40:31.468872    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541059879Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0422 04:40:31.468883    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541226925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0422 04:40:31.468892    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541376031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0422 04:40:31.468901    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541411674Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0422 04:40:31.468910    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541423221Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0422 04:40:31.468920    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541432259Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468930    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541440555Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468939    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541448433Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468948    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541457401Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468958    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541466668Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468968    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541474780Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.469077    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541483321Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.469088    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541490681Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.469097    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541503918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469105    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541513941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469114    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541522110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469123    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541530364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469131    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541538164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469140    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541546259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469149    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541553607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469158    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541562316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469167    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541570467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469177    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541582908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469186    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541590762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469194    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541598307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469203    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541606034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469212    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541617175Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0422 04:40:31.469220    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541630384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469235    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541639723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469244    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541646814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0422 04:40:31.469254    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541690816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0422 04:40:31.469265    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541704905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0422 04:40:31.469401    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541735544Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0422 04:40:31.469415    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541746288Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0422 04:40:31.469424    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541956055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469437    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541992919Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0422 04:40:31.469444    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542053080Z" level=info msg="NRI interface is disabled by configuration."
	I0422 04:40:31.469453    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542265818Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0422 04:40:31.469462    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542368204Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0422 04:40:31.469469    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542421668Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0422 04:40:31.469477    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542433824Z" level=info msg="containerd successfully booted in 0.024134s"
	I0422 04:40:31.469484    6416 command_runner.go:130] > Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.521245248Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0422 04:40:31.469492    6416 command_runner.go:130] > Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.536466420Z" level=info msg="Loading containers: start."
	I0422 04:40:31.469503    6416 command_runner.go:130] > Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.670082730Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0422 04:40:31.469510    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.470397892Z" level=info msg="Loading containers: done."
	I0422 04:40:31.469520    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.476831522Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0422 04:40:31.469528    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.477000847Z" level=info msg="Daemon has completed initialization"
	I0422 04:40:31.469536    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495177168Z" level=info msg="API listen on /var/run/docker.sock"
	I0422 04:40:31.469543    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495332686Z" level=info msg="API listen on [::]:2376"
	I0422 04:40:31.469549    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 systemd[1]: Started Docker Application Container Engine.
	I0422 04:40:31.469554    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0422 04:40:31.469561    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.509057098Z" level=info msg="Processing signal 'terminated'"
	I0422 04:40:31.469571    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510124902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0422 04:40:31.469580    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510320720Z" level=info msg="Daemon shutdown complete"
	I0422 04:40:31.469591    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510348907Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0422 04:40:31.469600    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510352277Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0422 04:40:31.469606    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0422 04:40:31.469612    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0422 04:40:31.469647    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0422 04:40:31.469655    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 dockerd[806]: time="2024-04-22T11:39:31.552429015Z" level=info msg="Starting up"
	I0422 04:40:31.469664    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 dockerd[806]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0422 04:40:31.469673    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0422 04:40:31.469680    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0422 04:40:31.469686    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0422 04:40:31.494051    6416 out.go:177] 
	W0422 04:40:31.514947    6416 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 22 11:39:27 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500273741Z" level=info msg="Starting up"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500896562Z" level=info msg="containerd not running, starting managed containerd"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.501458070Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.519154130Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536175934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536200901Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536237889Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536248409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536401321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536443904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536555068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536590399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536602655Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536609559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536757403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536982056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538601388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538639201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538724354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538735079Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538857030Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538906380Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538916250Z" level=info msg="metadata content store policy set" policy=shared
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540934544Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540980765Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540995031Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541005291Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541017645Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541059879Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541226925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541376031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541411674Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541423221Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541432259Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541440555Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541448433Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541457401Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541466668Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541474780Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541483321Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541490681Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541503918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541513941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541522110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541530364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541538164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541546259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541553607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541562316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541570467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541582908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541590762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541598307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541606034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541617175Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541630384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541639723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541646814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541690816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541704905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541735544Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541746288Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541956055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541992919Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542053080Z" level=info msg="NRI interface is disabled by configuration."
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542265818Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542368204Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542421668Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542433824Z" level=info msg="containerd successfully booted in 0.024134s"
	Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.521245248Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.536466420Z" level=info msg="Loading containers: start."
	Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.670082730Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.470397892Z" level=info msg="Loading containers: done."
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.476831522Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.477000847Z" level=info msg="Daemon has completed initialization"
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495177168Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495332686Z" level=info msg="API listen on [::]:2376"
	Apr 22 11:39:29 multinode-449000-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 22 11:39:30 multinode-449000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.509057098Z" level=info msg="Processing signal 'terminated'"
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510124902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510320720Z" level=info msg="Daemon shutdown complete"
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510348907Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510352277Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 22 11:39:31 multinode-449000-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 11:39:31 multinode-449000-m02 dockerd[806]: time="2024-04-22T11:39:31.552429015Z" level=info msg="Starting up"
	Apr 22 11:40:31 multinode-449000-m02 dockerd[806]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 22 11:40:31 multinode-449000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0422 04:40:31.515066    6416 out.go:239] * 
	W0422 04:40:31.516170    6416 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 04:40:31.600069    6416 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
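The root cause visible in the captured stderr above is that the restarted dockerd (pid 806) timed out dialing the containerd socket ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), so docker.service never came back up on multinode-449000-m02 and minikube aborted with RUNTIME_ENABLE. The following is a minimal triage sketch, not part of the recorded run: it assumes the m02 VM is still reachable through the profile shown in the log, and it uses only standard minikube and systemd commands.

	# Open a shell on the affected node (profile and node names taken from the log above).
	minikube ssh -p multinode-449000 -n multinode-449000-m02

	# Inside the node: inspect the failed unit, as the error text itself suggests.
	sudo systemctl status docker.service
	sudo journalctl -xeu docker.service --no-pager | tail -n 50

	# dockerd timed out dialing containerd's socket, so check whether that socket
	# exists and whether a separate containerd unit is running at all.
	ls -l /run/containerd/containerd.sock
	sudo systemctl status containerd

Note that the earlier, successful boot (dockerd[513]) launched a managed containerd on /var/run/docker/containerd/containerd.sock, while the failing restart (dockerd[806]) waited on /run/containerd/containerd.sock; whether that timeout comes from a containerd unit that never started or from a misconfigured socket path should be apparent in the systemctl/journalctl output above, which is also what a minikube bug report would need.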
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-449000 -n multinode-449000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 logs -n 25: (2.757640314s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-449000 cp multinode-449000-m02:/home/docker/cp-test.txt                                                         | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000:/home/docker/cp-test_multinode-449000-m02_multinode-449000.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n                                                                                                   | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m02 sudo cat                                                                                             |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n multinode-449000 sudo cat                                                                         | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | /home/docker/cp-test_multinode-449000-m02_multinode-449000.txt                                                            |                  |         |         |                     |                     |
	| cp      | multinode-449000 cp multinode-449000-m02:/home/docker/cp-test.txt                                                         | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m03:/home/docker/cp-test_multinode-449000-m02_multinode-449000-m03.txt                                   |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n                                                                                                   | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m02 sudo cat                                                                                             |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n multinode-449000-m03 sudo cat                                                                     | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | /home/docker/cp-test_multinode-449000-m02_multinode-449000-m03.txt                                                        |                  |         |         |                     |                     |
	| cp      | multinode-449000 cp testdata/cp-test.txt                                                                                  | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m03:/home/docker/cp-test.txt                                                                             |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n                                                                                                   | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m03 sudo cat                                                                                             |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |                  |         |         |                     |                     |
	| cp      | multinode-449000 cp multinode-449000-m03:/home/docker/cp-test.txt                                                         | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile25091067/001/cp-test_multinode-449000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n                                                                                                   | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m03 sudo cat                                                                                             |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |                  |         |         |                     |                     |
	| cp      | multinode-449000 cp multinode-449000-m03:/home/docker/cp-test.txt                                                         | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000:/home/docker/cp-test_multinode-449000-m03_multinode-449000.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n                                                                                                   | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m03 sudo cat                                                                                             |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n multinode-449000 sudo cat                                                                         | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | /home/docker/cp-test_multinode-449000-m03_multinode-449000.txt                                                            |                  |         |         |                     |                     |
	| cp      | multinode-449000 cp multinode-449000-m03:/home/docker/cp-test.txt                                                         | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m02:/home/docker/cp-test_multinode-449000-m03_multinode-449000-m02.txt                                   |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n                                                                                                   | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | multinode-449000-m03 sudo cat                                                                                             |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |                  |         |         |                     |                     |
	| ssh     | multinode-449000 ssh -n multinode-449000-m02 sudo cat                                                                     | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	|         | /home/docker/cp-test_multinode-449000-m03_multinode-449000-m02.txt                                                        |                  |         |         |                     |                     |
	| node    | multinode-449000 node stop m03                                                                                            | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:32 PDT |
	| node    | multinode-449000 node start                                                                                               | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:32 PDT | 22 Apr 24 04:33 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                |                  |         |         |                     |                     |
	| node    | list -p multinode-449000                                                                                                  | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:33 PDT |                     |
	| stop    | -p multinode-449000                                                                                                       | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:33 PDT | 22 Apr 24 04:33 PDT |
	| start   | -p multinode-449000                                                                                                       | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:33 PDT | 22 Apr 24 04:37 PDT |
	|         | --wait=true -v=8                                                                                                          |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                         |                  |         |         |                     |                     |
	| node    | list -p multinode-449000                                                                                                  | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:37 PDT |                     |
	| node    | multinode-449000 node delete                                                                                              | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:37 PDT | 22 Apr 24 04:37 PDT |
	|         | m03                                                                                                                       |                  |         |         |                     |                     |
	| stop    | multinode-449000 stop                                                                                                     | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:37 PDT | 22 Apr 24 04:38 PDT |
	| start   | -p multinode-449000                                                                                                       | multinode-449000 | jenkins | v1.33.0 | 22 Apr 24 04:38 PDT |                     |
	|         | --wait=true -v=8                                                                                                          |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                         |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                         |                  |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 04:38:10
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 04:38:10.248163    6416 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:38:10.248364    6416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:38:10.248370    6416 out.go:304] Setting ErrFile to fd 2...
	I0422 04:38:10.248373    6416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:38:10.248551    6416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:38:10.249993    6416 out.go:298] Setting JSON to false
	I0422 04:38:10.272166    6416 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4060,"bootTime":1713781830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 04:38:10.272260    6416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 04:38:10.294339    6416 out.go:177] * [multinode-449000] minikube v1.33.0 on Darwin 14.4.1
	I0422 04:38:10.337130    6416 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 04:38:10.337190    6416 notify.go:220] Checking for updates...
	I0422 04:38:10.359049    6416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:10.379944    6416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 04:38:10.422063    6416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 04:38:10.442840    6416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	I0422 04:38:10.463898    6416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 04:38:10.485993    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:10.486650    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.486738    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.496755    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52202
	I0422 04:38:10.497088    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.497505    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.497514    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.497724    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.497841    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.498035    6416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 04:38:10.498265    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.498287    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.506538    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52204
	I0422 04:38:10.506852    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.507183    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.507192    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.507441    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.507612    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.536047    6416 out.go:177] * Using the hyperkit driver based on existing profile
	I0422 04:38:10.557148    6416 start.go:297] selected driver: hyperkit
	I0422 04:38:10.557176    6416 start.go:901] validating driver "hyperkit" against &{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:38:10.557446    6416 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 04:38:10.557634    6416 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 04:38:10.557843    6416 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18711-1033/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0422 04:38:10.567230    6416 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0422 04:38:10.571069    6416 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.571104    6416 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0422 04:38:10.573692    6416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 04:38:10.573749    6416 cni.go:84] Creating CNI manager for ""
	I0422 04:38:10.573757    6416 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0422 04:38:10.573831    6416 start.go:340] cluster config:
	{Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:38:10.573922    6416 iso.go:125] acquiring lock: {Name:mk174d786084574fba345b763762a2b8adb514c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 04:38:10.616012    6416 out.go:177] * Starting "multinode-449000" primary control-plane node in "multinode-449000" cluster
	I0422 04:38:10.637091    6416 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 04:38:10.637190    6416 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0422 04:38:10.637216    6416 cache.go:56] Caching tarball of preloaded images
	I0422 04:38:10.637410    6416 preload.go:173] Found /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0422 04:38:10.637428    6416 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 04:38:10.637608    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:10.638476    6416 start.go:360] acquireMachinesLock for multinode-449000: {Name:mke81a6cfc4bf5ce8e1de7ad51be0d2fed5c5582 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 04:38:10.638592    6416 start.go:364] duration metric: took 92.843µs to acquireMachinesLock for "multinode-449000"
	I0422 04:38:10.638625    6416 start.go:96] Skipping create...Using existing machine configuration
	I0422 04:38:10.638642    6416 fix.go:54] fixHost starting: 
	I0422 04:38:10.639054    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.639115    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.648338    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52206
	I0422 04:38:10.648728    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.649122    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.649138    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.649380    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.649549    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.649663    6416 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I0422 04:38:10.649749    6416 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.649830    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 6245
	I0422 04:38:10.650803    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 6245 missing from process table
	I0422 04:38:10.650860    6416 fix.go:112] recreateIfNeeded on multinode-449000: state=Stopped err=<nil>
	I0422 04:38:10.650884    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	W0422 04:38:10.650971    6416 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 04:38:10.692813    6416 out.go:177] * Restarting existing hyperkit VM for "multinode-449000" ...
	I0422 04:38:10.715060    6416 main.go:141] libmachine: (multinode-449000) Calling .Start
	I0422 04:38:10.715338    6416 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.715396    6416 main.go:141] libmachine: (multinode-449000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid
	I0422 04:38:10.717236    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 6245 missing from process table
	I0422 04:38:10.717259    6416 main.go:141] libmachine: (multinode-449000) DBG | pid 6245 is in state "Stopped"
	I0422 04:38:10.717293    6416 main.go:141] libmachine: (multinode-449000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid...
	I0422 04:38:10.717482    6416 main.go:141] libmachine: (multinode-449000) DBG | Using UUID 586ad748-6be9-44d4-8ddd-2786953ca4c9
	I0422 04:38:10.827549    6416 main.go:141] libmachine: (multinode-449000) DBG | Generated MAC 3e:5c:84:88:5b:2b
	I0422 04:38:10.827575    6416 main.go:141] libmachine: (multinode-449000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000
	I0422 04:38:10.827703    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"586ad748-6be9-44d4-8ddd-2786953ca4c9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b15c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:38:10.827733    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"586ad748-6be9-44d4-8ddd-2786953ca4c9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b15c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:38:10.827793    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "586ad748-6be9-44d4-8ddd-2786953ca4c9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/multinode-449000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"}
	I0422 04:38:10.827825    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 586ad748-6be9-44d4-8ddd-2786953ca4c9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/multinode-449000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/console-ring -f kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"
	I0422 04:38:10.827841    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0422 04:38:10.829342    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 DEBUG: hyperkit: Pid is 6429
	I0422 04:38:10.829707    6416 main.go:141] libmachine: (multinode-449000) DBG | Attempt 0
	I0422 04:38:10.829720    6416 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.829787    6416 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 6429
	I0422 04:38:10.831421    6416 main.go:141] libmachine: (multinode-449000) DBG | Searching for 3e:5c:84:88:5b:2b in /var/db/dhcpd_leases ...
	I0422 04:38:10.831501    6416 main.go:141] libmachine: (multinode-449000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0422 04:38:10.831518    6416 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:33:e:18:56:49 ID:1,92:33:e:18:56:49 Lease:0x66264c0f}
	I0422 04:38:10.831540    6416 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:e2:d0:5:63:30:40 ID:1,e2:d0:5:63:30:40 Lease:0x66279d43}
	I0422 04:38:10.831555    6416 main.go:141] libmachine: (multinode-449000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:3e:5c:84:88:5b:2b ID:1,3e:5c:84:88:5b:2b Lease:0x66279ca6}
	I0422 04:38:10.831562    6416 main.go:141] libmachine: (multinode-449000) DBG | Found match: 3e:5c:84:88:5b:2b
	I0422 04:38:10.831566    6416 main.go:141] libmachine: (multinode-449000) DBG | IP: 192.169.0.16
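
The driver resolves the restarted VM's address by generating a MAC, then scanning macOS's /var/db/dhcpd_leases until an entry's hardware address matches, as the dhcp entries above show. A minimal Go sketch of that lookup, assuming the simplified entry shape printed in the log (the real driver parses the lease file itself):

package main

import (
	"fmt"
	"strings"
)

// lease mirrors the fields of the dhcp entries printed in the log.
type lease struct {
	Name, IP, HWAddress string
}

// findByMAC returns the IP of the entry whose hardware address matches
// the MAC the driver generated for the VM.
func findByMAC(leases []lease, mac string) (string, bool) {
	for _, l := range leases {
		if strings.EqualFold(l.HWAddress, mac) {
			return l.IP, true
		}
	}
	return "", false
}

func main() {
	// the three entries shown above, out of the 17 the driver found
	leases := []lease{
		{Name: "minikube", IP: "192.169.0.18", HWAddress: "92:33:e:18:56:49"},
		{Name: "minikube", IP: "192.169.0.17", HWAddress: "e2:d0:5:63:30:40"},
		{Name: "minikube", IP: "192.169.0.16", HWAddress: "3e:5c:84:88:5b:2b"},
	}
	ip, ok := findByMAC(leases, "3e:5c:84:88:5b:2b")
	fmt.Println(ip, ok) // 192.169.0.16 true
}
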
	I0422 04:38:10.831599    6416 main.go:141] libmachine: (multinode-449000) Calling .GetConfigRaw
	I0422 04:38:10.832231    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:10.832383    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:10.832765    6416 machine.go:94] provisionDockerMachine start ...
	I0422 04:38:10.832776    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:10.832900    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:10.832988    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:10.833079    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:10.833169    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:10.833261    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:10.833384    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:10.833572    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:10.833579    6416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 04:38:10.837041    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0422 04:38:10.890624    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0422 04:38:10.891322    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:10.891342    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:10.891353    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:10.891361    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:11.268602    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0422 04:38:11.268615    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0422 04:38:11.383528    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:11.383546    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:11.383558    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:11.383571    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:11.384537    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0422 04:38:11.384549    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0422 04:38:16.643459    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0422 04:38:16.643515    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0422 04:38:16.643527    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0422 04:38:16.667459    6416 main.go:141] libmachine: (multinode-449000) DBG | 2024/04/22 04:38:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0422 04:38:21.903644    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 04:38:21.903661    6416 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I0422 04:38:21.903793    6416 buildroot.go:166] provisioning hostname "multinode-449000"
	I0422 04:38:21.903802    6416 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I0422 04:38:21.903888    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:21.903992    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:21.904101    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.904188    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.904320    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:21.904442    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:21.904588    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:21.904600    6416 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000 && echo "multinode-449000" | sudo tee /etc/hostname
	I0422 04:38:21.971569    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000
	
	I0422 04:38:21.971595    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:21.971731    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:21.971831    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.971922    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:21.972011    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:21.972141    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:21.972284    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:21.972295    6416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 04:38:22.037323    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
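
The SSH command above keeps the 127.0.1.1 entry in /etc/hosts aligned with the new hostname: if the hostname is already mapped it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends one. A minimal in-memory Go sketch of the same branching (the present-check is simplified relative to the grep -xq tests):

package main

import (
	"fmt"
	"regexp"
)

// ensureHostname returns hosts with a "127.0.1.1 <name>" entry, rewriting
// an existing 127.0.1.1 line or appending one, as the shell above does.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already present: nothing to do
	}
	entry := "127.0.1.1 " + name
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, entry)
	}
	return hosts + entry + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "multinode-449000"))
	// 127.0.0.1 localhost
	// 127.0.1.1 multinode-449000
}
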
	I0422 04:38:22.037350    6416 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18711-1033/.minikube CaCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18711-1033/.minikube}
	I0422 04:38:22.037367    6416 buildroot.go:174] setting up certificates
	I0422 04:38:22.037374    6416 provision.go:84] configureAuth start
	I0422 04:38:22.037380    6416 main.go:141] libmachine: (multinode-449000) Calling .GetMachineName
	I0422 04:38:22.037516    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:22.037614    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.037712    6416 provision.go:143] copyHostCerts
	I0422 04:38:22.037744    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:38:22.037812    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem, removing ...
	I0422 04:38:22.037820    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:38:22.037947    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem (1082 bytes)
	I0422 04:38:22.038158    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:38:22.038199    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem, removing ...
	I0422 04:38:22.038204    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:38:22.038293    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem (1123 bytes)
	I0422 04:38:22.038447    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:38:22.038487    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem, removing ...
	I0422 04:38:22.038492    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:38:22.038571    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem (1675 bytes)
	I0422 04:38:22.038729    6416 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem org=jenkins.multinode-449000 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-449000]
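
configureAuth then regenerates the machine's server certificate with exactly the SAN set printed above. A minimal self-signed sketch using only the Go standard library; the key type, usages, and self-signing are illustrative, since the real code signs with the profile CA (ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// RSA 2048 keeps the sketch simple; the key type is not shown in the log
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-449000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// the SAN list printed by provision.go above
		DNSNames:    []string{"localhost", "minikube", "multinode-449000"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.16")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// self-signed for brevity; the log signs with the profile CA instead
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
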
	I0422 04:38:22.288976    6416 provision.go:177] copyRemoteCerts
	I0422 04:38:22.289045    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 04:38:22.289061    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.289250    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.289387    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.289552    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.289728    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:22.326188    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 04:38:22.326259    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0422 04:38:22.345869    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 04:38:22.345939    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0422 04:38:22.365184    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 04:38:22.365245    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 04:38:22.384572    6416 provision.go:87] duration metric: took 347.183732ms to configureAuth
	I0422 04:38:22.384586    6416 buildroot.go:189] setting minikube options for container-runtime
	I0422 04:38:22.384747    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:22.384780    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:22.384915    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.385016    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.385092    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.385179    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.385267    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.385392    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:22.385591    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:22.385600    6416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 04:38:22.442573    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0422 04:38:22.442585    6416 buildroot.go:70] root file system type: tmpfs
	I0422 04:38:22.442651    6416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 04:38:22.442664    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.442789    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.442871    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.442958    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.443072    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.443225    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:22.443357    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:22.443405    6416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 04:38:22.512640    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
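The unit text above is rendered on the host and streamed to the guest through sudo tee; the %!s(MISSING) in the printf line is Go's fmt marker for a verb without a matching argument, an artifact of how the command is logged rather than part of the unit. A minimal sketch of rendering a unit like this from a Go text/template, with hypothetical field names:

package main

import (
	"os"
	"text/template"
)

// a trimmed version of the docker.service unit written above;
// Provider and InsecureRegistry are hypothetical template fields
const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	// values taken from the rendered unit in the log
	err := t.Execute(os.Stdout, map[string]string{
		"Provider":         "hyperkit",
		"InsecureRegistry": "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
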
	I0422 04:38:22.512660    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:22.512796    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:22.512899    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.512984    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:22.513080    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:22.513216    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:22.513363    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:22.513377    6416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 04:38:24.188655    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
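The diff error above is expected on this boot: /lib/systemd/system/docker.service does not exist yet, so the || branch moves the .new file into place, reloads systemd, and enables and restarts Docker. A minimal Go sketch of the same install-only-if-changed pattern (paths, error handling, and the reload step are abbreviated and illustrative):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// swapIfChanged mirrors the diff/mv/daemon-reload sequence above: install
// path+".new" over path only when the contents differ, then reload systemd.
// The enable/restart steps from the log are omitted for brevity.
func swapIfChanged(path string) error {
	next, err := os.ReadFile(path + ".new")
	if err != nil {
		return err
	}
	cur, err := os.ReadFile(path) // may fail, as it does in this log
	if err == nil && bytes.Equal(cur, next) {
		return nil // unchanged: skip the move and the docker restart
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	return exec.Command("systemctl", "daemon-reload").Run()
}

func main() {
	_ = swapIfChanged("/lib/systemd/system/docker.service")
}
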
	I0422 04:38:24.188669    6416 machine.go:97] duration metric: took 13.355824894s to provisionDockerMachine
	I0422 04:38:24.188682    6416 start.go:293] postStartSetup for "multinode-449000" (driver="hyperkit")
	I0422 04:38:24.188690    6416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 04:38:24.188702    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.188878    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 04:38:24.188902    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.189005    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.189091    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.189171    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.189261    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:24.226328    6416 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 04:38:24.229232    6416 command_runner.go:130] > NAME=Buildroot
	I0422 04:38:24.229244    6416 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 04:38:24.229250    6416 command_runner.go:130] > ID=buildroot
	I0422 04:38:24.229257    6416 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 04:38:24.229265    6416 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 04:38:24.229393    6416 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 04:38:24.229405    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/addons for local assets ...
	I0422 04:38:24.229504    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/files for local assets ...
	I0422 04:38:24.229694    6416 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> 14842.pem in /etc/ssl/certs
	I0422 04:38:24.229700    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> /etc/ssl/certs/14842.pem
	I0422 04:38:24.229905    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 04:38:24.237775    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /etc/ssl/certs/14842.pem (1708 bytes)
	I0422 04:38:24.256541    6416 start.go:296] duration metric: took 67.851408ms for postStartSetup
	I0422 04:38:24.256563    6416 fix.go:56] duration metric: took 13.617856509s for fixHost
	I0422 04:38:24.256575    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.256706    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.256802    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.256895    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.256967    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.257074    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:24.257215    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0422 04:38:24.257222    6416 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 04:38:24.315363    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713785904.473126148
	
	I0422 04:38:24.315375    6416 fix.go:216] guest clock: 1713785904.473126148
	I0422 04:38:24.315380    6416 fix.go:229] Guest: 2024-04-22 04:38:24.473126148 -0700 PDT Remote: 2024-04-22 04:38:24.256566 -0700 PDT m=+14.050727463 (delta=216.560148ms)
	I0422 04:38:24.315396    6416 fix.go:200] guest clock delta is within tolerance: 216.560148ms
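
fixHost compares the guest clock, read over SSH with date, against the host clock and only resyncs when the delta exceeds a tolerance; here the 216.560148ms delta passes. A minimal sketch of that check using the values logged above; the 2-second tolerance is an assumption, not a value taken from minikube:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is small
// enough that the guest clock does not need to be reset.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	guest := time.Unix(1713785904, 473126148)       // guest clock from the log
	host := guest.Add(-216560148 * time.Nanosecond) // the logged 216.560148ms delta
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}
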
	I0422 04:38:24.315401    6416 start.go:83] releasing machines lock for "multinode-449000", held for 13.676725524s
	I0422 04:38:24.315421    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.315568    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:24.315664    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.316019    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.316120    6416 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:38:24.316191    6416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 04:38:24.316222    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.316257    6416 ssh_runner.go:195] Run: cat /version.json
	I0422 04:38:24.316268    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:38:24.316316    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.316353    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:38:24.316410    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.316439    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:38:24.316486    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.316525    6416 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:38:24.316572    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:24.316620    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:38:24.348077    6416 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0422 04:38:24.348180    6416 ssh_runner.go:195] Run: systemctl --version
	I0422 04:38:24.396154    6416 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 04:38:24.396617    6416 command_runner.go:130] > systemd 252 (252)
	I0422 04:38:24.396654    6416 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0422 04:38:24.396765    6416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 04:38:24.402095    6416 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 04:38:24.402152    6416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 04:38:24.402190    6416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 04:38:24.414497    6416 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0422 04:38:24.414528    6416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 04:38:24.414535    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:38:24.414635    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:38:24.429342    6416 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0422 04:38:24.429595    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0422 04:38:24.437952    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 04:38:24.446259    6416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 04:38:24.446300    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 04:38:24.454738    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:38:24.463080    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 04:38:24.471637    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:38:24.480009    6416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 04:38:24.488561    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 04:38:24.497065    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 04:38:24.505465    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 04:38:24.514035    6416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 04:38:24.521603    6416 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 04:38:24.521671    6416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 04:38:24.529437    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:24.637449    6416 ssh_runner.go:195] Run: sudo systemctl restart containerd
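
Even with Docker as the selected runtime, the start path first normalizes /etc/containerd/config.toml, including forcing the cgroupfs driver by rewriting SystemdCgroup to false, before restarting containerd. The sed expression above reduces to a single anchored substitution; a minimal Go equivalent over an illustrative TOML fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// an illustrative fragment of /etc/containerd/config.toml
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// the same substitution as the sed expression in the log:
	//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
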
	I0422 04:38:24.655839    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:38:24.655917    6416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 04:38:24.673150    6416 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0422 04:38:24.673163    6416 command_runner.go:130] > [Unit]
	I0422 04:38:24.673169    6416 command_runner.go:130] > Description=Docker Application Container Engine
	I0422 04:38:24.673183    6416 command_runner.go:130] > Documentation=https://docs.docker.com
	I0422 04:38:24.673188    6416 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0422 04:38:24.673192    6416 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0422 04:38:24.673196    6416 command_runner.go:130] > StartLimitBurst=3
	I0422 04:38:24.673200    6416 command_runner.go:130] > StartLimitIntervalSec=60
	I0422 04:38:24.673203    6416 command_runner.go:130] > [Service]
	I0422 04:38:24.673206    6416 command_runner.go:130] > Type=notify
	I0422 04:38:24.673210    6416 command_runner.go:130] > Restart=on-failure
	I0422 04:38:24.673216    6416 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0422 04:38:24.673223    6416 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0422 04:38:24.673230    6416 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0422 04:38:24.673236    6416 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0422 04:38:24.673241    6416 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0422 04:38:24.673247    6416 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0422 04:38:24.673253    6416 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0422 04:38:24.673264    6416 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0422 04:38:24.673270    6416 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0422 04:38:24.673279    6416 command_runner.go:130] > ExecStart=
	I0422 04:38:24.673291    6416 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0422 04:38:24.673296    6416 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0422 04:38:24.673303    6416 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0422 04:38:24.673309    6416 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0422 04:38:24.673312    6416 command_runner.go:130] > LimitNOFILE=infinity
	I0422 04:38:24.673316    6416 command_runner.go:130] > LimitNPROC=infinity
	I0422 04:38:24.673319    6416 command_runner.go:130] > LimitCORE=infinity
	I0422 04:38:24.673324    6416 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0422 04:38:24.673328    6416 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0422 04:38:24.673332    6416 command_runner.go:130] > TasksMax=infinity
	I0422 04:38:24.673335    6416 command_runner.go:130] > TimeoutStartSec=0
	I0422 04:38:24.673341    6416 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0422 04:38:24.673344    6416 command_runner.go:130] > Delegate=yes
	I0422 04:38:24.673349    6416 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0422 04:38:24.673353    6416 command_runner.go:130] > KillMode=process
	I0422 04:38:24.673356    6416 command_runner.go:130] > [Install]
	I0422 04:38:24.673365    6416 command_runner.go:130] > WantedBy=multi-user.target
	I0422 04:38:24.673434    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:38:24.685276    6416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 04:38:24.709796    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:38:24.724576    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:38:24.739589    6416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0422 04:38:24.761051    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
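
Because Docker was selected as the runtime, the log stops containerd and crio and then re-checks that each is inactive. A sketch of that check, stop, re-check loop (running systemctl locally here for illustration; the log runs it on the VM, and minikube's exact invocation includes a literal "service" argument as shown above):

package main

import (
	"fmt"
	"os/exec"
)

// isActive mirrors `systemctl is-active --quiet <unit>`: a zero exit status
// means the unit is currently active.
func isActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	// Stop any runtime that would compete with Docker, as the log does for
	// containerd and crio, then verify it really went down.
	for _, unit := range []string{"containerd", "crio"} {
		if isActive(unit) {
			if err := exec.Command("sudo", "systemctl", "stop", "-f", unit).Run(); err != nil {
				fmt.Println("stop failed:", err)
			}
		}
		fmt.Printf("%s active after stop: %v\n", unit, isActive(unit))
	}
}
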
	I0422 04:38:24.777401    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:38:24.796782    6416 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0422 04:38:24.797172    6416 ssh_runner.go:195] Run: which cri-dockerd
	I0422 04:38:24.800004    6416 command_runner.go:130] > /usr/bin/cri-dockerd
	I0422 04:38:24.800148    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 04:38:24.808594    6416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 04:38:24.821942    6416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 04:38:24.923982    6416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 04:38:25.041199    6416 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 04:38:25.041277    6416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0422 04:38:25.055516    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:25.153764    6416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 04:38:27.475204    6416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.321409526s)
	I0422 04:38:27.475263    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0422 04:38:27.486848    6416 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0422 04:38:27.500729    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 04:38:27.511120    6416 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0422 04:38:27.609886    6416 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0422 04:38:27.709696    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:27.818455    6416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0422 04:38:27.832514    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 04:38:27.843827    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:27.946861    6416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0422 04:38:28.005885    6416 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0422 04:38:28.005983    6416 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0422 04:38:28.010314    6416 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0422 04:38:28.010327    6416 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0422 04:38:28.010344    6416 command_runner.go:130] > Device: 0,22	Inode: 757         Links: 1
	I0422 04:38:28.010353    6416 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0422 04:38:28.010358    6416 command_runner.go:130] > Access: 2024-04-22 11:38:28.117421263 +0000
	I0422 04:38:28.010363    6416 command_runner.go:130] > Modify: 2024-04-22 11:38:28.117421263 +0000
	I0422 04:38:28.010368    6416 command_runner.go:130] > Change: 2024-04-22 11:38:28.119421095 +0000
	I0422 04:38:28.010372    6416 command_runner.go:130] >  Birth: -
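
The stat call above is how start.go honors its "Will wait 60s for socket path" promise: poll until /var/run/cri-dockerd.sock exists or the deadline passes. A minimal local sketch of that wait loop (waitForSocket is an illustrative helper, not minikube's API):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
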
	I0422 04:38:28.010437    6416 start.go:562] Will wait 60s for crictl version
	I0422 04:38:28.010483    6416 ssh_runner.go:195] Run: which crictl
	I0422 04:38:28.013358    6416 command_runner.go:130] > /usr/bin/crictl
	I0422 04:38:28.013570    6416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 04:38:28.042737    6416 command_runner.go:130] > Version:  0.1.0
	I0422 04:38:28.042763    6416 command_runner.go:130] > RuntimeName:  docker
	I0422 04:38:28.042768    6416 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0422 04:38:28.042772    6416 command_runner.go:130] > RuntimeApiVersion:  v1
	I0422 04:38:28.043795    6416 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0422 04:38:28.043861    6416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 04:38:28.061054    6416 command_runner.go:130] > 26.0.1
	I0422 04:38:28.061844    6416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 04:38:28.077996    6416 command_runner.go:130] > 26.0.1
	I0422 04:38:28.123584    6416 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0422 04:38:28.123633    6416 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:38:28.124044    6416 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0422 04:38:28.128797    6416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 04:38:28.139430    6416 kubeadm.go:877] updating cluster {Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 04:38:28.139513    6416 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 04:38:28.139574    6416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 04:38:28.159585    6416 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0422 04:38:28.159598    6416 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 04:38:28.159601    6416 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0422 04:38:28.159606    6416 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0422 04:38:28.159609    6416 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0422 04:38:28.159613    6416 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0422 04:38:28.159617    6416 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0422 04:38:28.159621    6416 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0422 04:38:28.159625    6416 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 04:38:28.159629    6416 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0422 04:38:28.160212    6416 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0422 04:38:28.160222    6416 docker.go:615] Images already preloaded, skipping extraction
	I0422 04:38:28.160287    6416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 04:38:28.175656    6416 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0422 04:38:28.175672    6416 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 04:38:28.175676    6416 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0422 04:38:28.175680    6416 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0422 04:38:28.175684    6416 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0422 04:38:28.175687    6416 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0422 04:38:28.175693    6416 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0422 04:38:28.175699    6416 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0422 04:38:28.175706    6416 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 04:38:28.175712    6416 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0422 04:38:28.175755    6416 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0422 04:38:28.175768    6416 cache_images.go:84] Images are preloaded, skipping loading
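
The "Images are preloaded, skipping loading" decision comes from comparing the preload manifest against the output of `docker images --format {{.Repository}}:{{.Tag}}`. A sketch of that set comparison (helper name illustrative; the sample image list is copied from the log):

package main

import "fmt"

// missingImages returns the required images not present in the runtime; an
// empty result means extraction of the preload tarball can be skipped.
func missingImages(required, have []string) []string {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	var missing []string
	for _, img := range required {
		if !got[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	}
	have := required // pretend `docker images` returned everything
	fmt.Println("missing:", missingImages(required, have)) // missing: []
}
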
	I0422 04:38:28.175777    6416 kubeadm.go:928] updating node { 192.169.0.16 8443 v1.30.0 docker true true} ...
	I0422 04:38:28.175851    6416 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-449000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 04:38:28.175913    6416 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0422 04:38:28.192840    6416 command_runner.go:130] > cgroupfs
	I0422 04:38:28.193474    6416 cni.go:84] Creating CNI manager for ""
	I0422 04:38:28.193485    6416 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0422 04:38:28.193496    6416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 04:38:28.193512    6416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.16 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-449000 NodeName:multinode-449000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 04:38:28.193598    6416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-449000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
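
The kubeadm config above is rendered from the cluster profile (the kubeadm options struct a few lines earlier). As a toy illustration only, not minikube's actual template or field names, here is how one fragment of it could be produced with text/template:

package main

import (
	"os"
	"text/template"
)

// frag is a toy fragment of the InitConfiguration shown in the log; the real
// template covers the entire multi-document kubeadm config.
const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	// Values taken from the log's kubeadm options.
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.169.0.16",
		"APIServerPort":    8443,
	})
}
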
	
	I0422 04:38:28.193662    6416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 04:38:28.201773    6416 command_runner.go:130] > kubeadm
	I0422 04:38:28.201781    6416 command_runner.go:130] > kubectl
	I0422 04:38:28.201785    6416 command_runner.go:130] > kubelet
	I0422 04:38:28.201888    6416 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 04:38:28.201931    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 04:38:28.209885    6416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0422 04:38:28.223696    6416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 04:38:28.236998    6416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0422 04:38:28.250595    6416 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0422 04:38:28.253512    6416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
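
The /etc/hosts edit above is an idempotent replace: filter out any existing line ending in the host name, append a fresh "ip<TAB>name" mapping to a temp file, and sudo-copy the result back. A sketch that rebuilds the same one-liner (hostsUpdateCmd is an illustrative helper):

package main

import "fmt"

// hostsUpdateCmd reproduces the shell pattern from the log: drop any existing
// entry for name, append "ip\tname", then install the temp file with sudo.
func hostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		name, ip, name)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.169.0.16", "control-plane.minikube.internal"))
}
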
	I0422 04:38:28.263052    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:28.375325    6416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 04:38:28.390337    6416 certs.go:68] Setting up /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000 for IP: 192.169.0.16
	I0422 04:38:28.390351    6416 certs.go:194] generating shared ca certs ...
	I0422 04:38:28.390365    6416 certs.go:226] acquiring lock for ca certs: {Name:mk61c76ef71e4ac1dee0d1c0b2031f8bdb3ae618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 04:38:28.390542    6416 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.key
	I0422 04:38:28.390612    6416 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.key
	I0422 04:38:28.390624    6416 certs.go:256] generating profile certs ...
	I0422 04:38:28.390724    6416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/client.key
	I0422 04:38:28.390806    6416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.key.36931f31
	I0422 04:38:28.390886    6416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.key
	I0422 04:38:28.390893    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 04:38:28.390915    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 04:38:28.390933    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 04:38:28.390951    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 04:38:28.390969    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 04:38:28.390998    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 04:38:28.391026    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 04:38:28.391045    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 04:38:28.391154    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484.pem (1338 bytes)
	W0422 04:38:28.391201    6416 certs.go:480] ignoring /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484_empty.pem, impossibly tiny 0 bytes
	I0422 04:38:28.391209    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 04:38:28.391243    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem (1082 bytes)
	I0422 04:38:28.391280    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem (1123 bytes)
	I0422 04:38:28.391309    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem (1675 bytes)
	I0422 04:38:28.391381    6416 certs.go:484] found cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem (1708 bytes)
	I0422 04:38:28.391416    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.391450    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.391470    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484.pem -> /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.391931    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 04:38:28.433213    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0422 04:38:28.459785    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 04:38:28.482771    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 04:38:28.504810    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 04:38:28.525273    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 04:38:28.545136    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 04:38:28.565757    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 04:38:28.585678    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /usr/share/ca-certificates/14842.pem (1708 bytes)
	I0422 04:38:28.605783    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 04:38:28.625729    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/1484.pem --> /usr/share/ca-certificates/1484.pem (1338 bytes)
	I0422 04:38:28.645804    6416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 04:38:28.659496    6416 ssh_runner.go:195] Run: openssl version
	I0422 04:38:28.663618    6416 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0422 04:38:28.663762    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14842.pem && ln -fs /usr/share/ca-certificates/14842.pem /etc/ssl/certs/14842.pem"
	I0422 04:38:28.672105    6416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.675446    6416 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 22 10:45 /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.675571    6416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:45 /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.675614    6416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14842.pem
	I0422 04:38:28.679705    6416 command_runner.go:130] > 3ec20f2e
	I0422 04:38:28.679843    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 04:38:28.688233    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 04:38:28.696714    6416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.700071    6416 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 22 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.700137    6416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.700171    6416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 04:38:28.704336    6416 command_runner.go:130] > b5213941
	I0422 04:38:28.704507    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 04:38:28.712810    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484.pem && ln -fs /usr/share/ca-certificates/1484.pem /etc/ssl/certs/1484.pem"
	I0422 04:38:28.721043    6416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.724265    6416 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 22 10:45 /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.724345    6416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:45 /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.724381    6416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484.pem
	I0422 04:38:28.728520    6416 command_runner.go:130] > 51391683
	I0422 04:38:28.728643    6416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1484.pem /etc/ssl/certs/51391683.0"
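
The hashes printed above (3ec20f2e, b5213941, 51391683) are OpenSSL subject-name hashes: OpenSSL locates CA files by looking for a symlink named "<hash>.0" in /etc/ssl/certs, which is why each cert install ends with an `ln -fs`. A sketch of computing the hash and naming the link, shelling out to openssl just as the log does:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <cert>` and returns the
// hash OpenSSL's CA lookup expects as the symlink name "<hash>.0".
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	h, err := subjectHash(cert)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent of the log's: ln -fs <cert> /etc/ssl/certs/<hash>.0
	fmt.Println("ln -fs", cert, filepath.Join("/etc/ssl/certs", h+".0"))
}
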
	I0422 04:38:28.737033    6416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 04:38:28.740271    6416 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 04:38:28.740282    6416 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0422 04:38:28.740286    6416 command_runner.go:130] > Device: 253,1	Inode: 4196178     Links: 1
	I0422 04:38:28.740291    6416 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 04:38:28.740297    6416 command_runner.go:130] > Access: 2024-04-22 11:36:05.475707495 +0000
	I0422 04:38:28.740302    6416 command_runner.go:130] > Modify: 2024-04-22 11:29:04.616277157 +0000
	I0422 04:38:28.740306    6416 command_runner.go:130] > Change: 2024-04-22 11:29:04.616277157 +0000
	I0422 04:38:28.740310    6416 command_runner.go:130] >  Birth: 2024-04-22 11:29:04.615277214 +0000
	I0422 04:38:28.740411    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 04:38:28.744636    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.744743    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 04:38:28.748846    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.748976    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 04:38:28.753077    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.753210    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 04:38:28.757404    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.757528    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 04:38:28.761638    6416 command_runner.go:130] > Certificate will not expire
	I0422 04:38:28.761800    6416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 04:38:28.765925    6416 command_runner.go:130] > Certificate will not expire
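
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The same test expressed with Go's crypto/x509, as a local sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the question `openssl x509 -checkend 86400` answers in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon) // the log prints "Certificate will not expire"
}
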
	I0422 04:38:28.766126    6416 kubeadm.go:391] StartCluster: {Name:multinode-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:38:28.766236    6416 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0422 04:38:28.777332    6416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 04:38:28.784688    6416 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0422 04:38:28.784698    6416 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0422 04:38:28.784702    6416 command_runner.go:130] > /var/lib/minikube/etcd:
	I0422 04:38:28.784705    6416 command_runner.go:130] > member
	W0422 04:38:28.784814    6416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 04:38:28.784822    6416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 04:38:28.784829    6416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 04:38:28.784866    6416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 04:38:28.792597    6416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 04:38:28.792898    6416 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-449000" does not appear in /Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:28.792981    6416 kubeconfig.go:62] /Users/jenkins/minikube-integration/18711-1033/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-449000" cluster setting kubeconfig missing "multinode-449000" context setting]
	I0422 04:38:28.793208    6416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18711-1033/kubeconfig: {Name:mkd60fed3a4688e81c1999ca37fdf35eadd19815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 04:38:28.793897    6416 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:28.794090    6416 kapi.go:59] client config for multinode-449000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/client.key", CAFile:"/Users/jenkins/minikube-integration/18711-1033/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7e5aa40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0422 04:38:28.794400    6416 cert_rotation.go:137] Starting client certificate rotation controller
	I0422 04:38:28.794564    6416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 04:38:28.801771    6416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.16
	I0422 04:38:28.801789    6416 kubeadm.go:1154] stopping kube-system containers ...
	I0422 04:38:28.801838    6416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0422 04:38:28.816657    6416 command_runner.go:130] > 7fd342a68d84
	I0422 04:38:28.816667    6416 command_runner.go:130] > c6d63c83b44a
	I0422 04:38:28.816671    6416 command_runner.go:130] > 429b0a81fe65
	I0422 04:38:28.816674    6416 command_runner.go:130] > d5b3b5d5a468
	I0422 04:38:28.816678    6416 command_runner.go:130] > 7ad82cc3e663
	I0422 04:38:28.816681    6416 command_runner.go:130] > 8fd92d3d559f
	I0422 04:38:28.816693    6416 command_runner.go:130] > d272ef1c679e
	I0422 04:38:28.816697    6416 command_runner.go:130] > 8fc5f2d8668e
	I0422 04:38:28.816700    6416 command_runner.go:130] > be4f0b4b588e
	I0422 04:38:28.816704    6416 command_runner.go:130] > 62b5721c79fa
	I0422 04:38:28.816707    6416 command_runner.go:130] > 1df263b70ea2
	I0422 04:38:28.816710    6416 command_runner.go:130] > 8ac986224699
	I0422 04:38:28.816713    6416 command_runner.go:130] > 4cbfdf285d1b
	I0422 04:38:28.816716    6416 command_runner.go:130] > d6f28e2bec07
	I0422 04:38:28.816724    6416 command_runner.go:130] > 46dba4d36ef7
	I0422 04:38:28.816727    6416 command_runner.go:130] > 84c0422896cc
	I0422 04:38:28.816730    6416 command_runner.go:130] > d0dcd3425466
	I0422 04:38:28.816734    6416 command_runner.go:130] > c20333287578
	I0422 04:38:28.816737    6416 command_runner.go:130] > d5f7a23a34fc
	I0422 04:38:28.816741    6416 command_runner.go:130] > f83965b353cb
	I0422 04:38:28.816744    6416 command_runner.go:130] > 8e1ff1cf8fb4
	I0422 04:38:28.816748    6416 command_runner.go:130] > 5a57671878b6
	I0422 04:38:28.816751    6416 command_runner.go:130] > af6978b977fc
	I0422 04:38:28.816755    6416 command_runner.go:130] > 1f77c8f168b4
	I0422 04:38:28.816758    6416 command_runner.go:130] > c2f38fcb314e
	I0422 04:38:28.816762    6416 command_runner.go:130] > 1113d226e35e
	I0422 04:38:28.816765    6416 command_runner.go:130] > 769ad1ec6855
	I0422 04:38:28.816768    6416 command_runner.go:130] > 3874d8a2aa4c
	I0422 04:38:28.816771    6416 command_runner.go:130] > 476f40892e40
	I0422 04:38:28.816775    6416 command_runner.go:130] > 782b924a6d7c
	I0422 04:38:28.816784    6416 command_runner.go:130] > f03a888f78dc
	I0422 04:38:28.817341    6416 docker.go:483] Stopping containers: [7fd342a68d84 c6d63c83b44a 429b0a81fe65 d5b3b5d5a468 7ad82cc3e663 8fd92d3d559f d272ef1c679e 8fc5f2d8668e be4f0b4b588e 62b5721c79fa 1df263b70ea2 8ac986224699 4cbfdf285d1b d6f28e2bec07 46dba4d36ef7 84c0422896cc d0dcd3425466 c20333287578 d5f7a23a34fc f83965b353cb 8e1ff1cf8fb4 5a57671878b6 af6978b977fc 1f77c8f168b4 c2f38fcb314e 1113d226e35e 769ad1ec6855 3874d8a2aa4c 476f40892e40 782b924a6d7c f03a888f78dc]
	I0422 04:38:28.817433    6416 ssh_runner.go:195] Run: docker stop 7fd342a68d84 c6d63c83b44a 429b0a81fe65 d5b3b5d5a468 7ad82cc3e663 8fd92d3d559f d272ef1c679e 8fc5f2d8668e be4f0b4b588e 62b5721c79fa 1df263b70ea2 8ac986224699 4cbfdf285d1b d6f28e2bec07 46dba4d36ef7 84c0422896cc d0dcd3425466 c20333287578 d5f7a23a34fc f83965b353cb 8e1ff1cf8fb4 5a57671878b6 af6978b977fc 1f77c8f168b4 c2f38fcb314e 1113d226e35e 769ad1ec6855 3874d8a2aa4c 476f40892e40 782b924a6d7c f03a888f78dc
	I0422 04:38:28.827936    6416 command_runner.go:130] > 7fd342a68d84
	I0422 04:38:28.828422    6416 command_runner.go:130] > c6d63c83b44a
	I0422 04:38:28.828430    6416 command_runner.go:130] > 429b0a81fe65
	I0422 04:38:28.828434    6416 command_runner.go:130] > d5b3b5d5a468
	I0422 04:38:28.828438    6416 command_runner.go:130] > 7ad82cc3e663
	I0422 04:38:28.828464    6416 command_runner.go:130] > 8fd92d3d559f
	I0422 04:38:28.828982    6416 command_runner.go:130] > d272ef1c679e
	I0422 04:38:28.828988    6416 command_runner.go:130] > 8fc5f2d8668e
	I0422 04:38:28.829668    6416 command_runner.go:130] > be4f0b4b588e
	I0422 04:38:28.832080    6416 command_runner.go:130] > 62b5721c79fa
	I0422 04:38:28.832193    6416 command_runner.go:130] > 1df263b70ea2
	I0422 04:38:28.832198    6416 command_runner.go:130] > 8ac986224699
	I0422 04:38:28.832202    6416 command_runner.go:130] > 4cbfdf285d1b
	I0422 04:38:28.832205    6416 command_runner.go:130] > d6f28e2bec07
	I0422 04:38:28.832209    6416 command_runner.go:130] > 46dba4d36ef7
	I0422 04:38:28.832264    6416 command_runner.go:130] > 84c0422896cc
	I0422 04:38:28.832272    6416 command_runner.go:130] > d0dcd3425466
	I0422 04:38:28.832275    6416 command_runner.go:130] > c20333287578
	I0422 04:38:28.832293    6416 command_runner.go:130] > d5f7a23a34fc
	I0422 04:38:28.832300    6416 command_runner.go:130] > f83965b353cb
	I0422 04:38:28.832303    6416 command_runner.go:130] > 8e1ff1cf8fb4
	I0422 04:38:28.832711    6416 command_runner.go:130] > 5a57671878b6
	I0422 04:38:28.832716    6416 command_runner.go:130] > af6978b977fc
	I0422 04:38:28.832720    6416 command_runner.go:130] > 1f77c8f168b4
	I0422 04:38:28.832723    6416 command_runner.go:130] > c2f38fcb314e
	I0422 04:38:28.832726    6416 command_runner.go:130] > 1113d226e35e
	I0422 04:38:28.832729    6416 command_runner.go:130] > 769ad1ec6855
	I0422 04:38:28.832732    6416 command_runner.go:130] > 3874d8a2aa4c
	I0422 04:38:28.832735    6416 command_runner.go:130] > 476f40892e40
	I0422 04:38:28.832927    6416 command_runner.go:130] > 782b924a6d7c
	I0422 04:38:28.832932    6416 command_runner.go:130] > f03a888f78dc
	I0422 04:38:28.833558    6416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 04:38:28.846289    6416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 04:38:28.853726    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0422 04:38:28.853737    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0422 04:38:28.853744    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0422 04:38:28.853753    6416 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 04:38:28.853806    6416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 04:38:28.853814    6416 kubeadm.go:156] found existing configuration files:
	
	I0422 04:38:28.853856    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 04:38:28.860708    6416 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 04:38:28.860722    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 04:38:28.860761    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 04:38:28.868014    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 04:38:28.874999    6416 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 04:38:28.875063    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 04:38:28.875096    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 04:38:28.882687    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 04:38:28.889607    6416 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 04:38:28.889625    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 04:38:28.889659    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 04:38:28.897098    6416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 04:38:28.904160    6416 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 04:38:28.904182    6416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 04:38:28.904219    6416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
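
The loop above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not mention it; in this run the files are all absent, so every grep exits with status 2 and every `rm -f` is a no-op. A sketch of that staleness check (staleConfigs is an illustrative helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// staleConfigs returns the kubeconfig-style files that do not reference the
// expected endpoint and should be removed before kubeadm regenerates them.
func staleConfigs(endpoint string, paths []string) []string {
	var stale []string
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			stale = append(stale, p) // a missing file counts as stale too
		}
	}
	return stale
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	fmt.Println(staleConfigs("https://control-plane.minikube.internal:8443", files))
}
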
	I0422 04:38:28.911388    6416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 04:38:28.918978    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:28.983428    6416 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 04:38:28.983464    6416 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0422 04:38:28.983681    6416 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0422 04:38:28.983810    6416 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 04:38:28.984183    6416 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0422 04:38:28.984307    6416 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0422 04:38:28.984715    6416 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0422 04:38:28.984893    6416 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0422 04:38:28.985217    6416 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0422 04:38:28.985282    6416 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 04:38:28.985450    6416 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 04:38:28.986394    6416 command_runner.go:130] > [certs] Using the existing "sa" key
	I0422 04:38:28.986556    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:29.784155    6416 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 04:38:29.784183    6416 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 04:38:29.784214    6416 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 04:38:29.784219    6416 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 04:38:29.784225    6416 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 04:38:29.784230    6416 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 04:38:29.784356    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:29.833794    6416 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 04:38:29.834496    6416 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 04:38:29.834607    6416 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0422 04:38:29.945288    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:30.014615    6416 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 04:38:30.014631    6416 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 04:38:30.016187    6416 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 04:38:30.017381    6416 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 04:38:30.019427    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:30.090821    6416 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 04:38:30.094334    6416 api_server.go:52] waiting for apiserver process to appear ...
	I0422 04:38:30.094395    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:30.596643    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:31.094655    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:31.596495    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:32.094700    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:32.106469    6416 command_runner.go:130] > 1523
	I0422 04:38:32.106682    6416 api_server.go:72] duration metric: took 2.01234244s to wait for apiserver process to appear ...
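
The repeated pgrep runs above are a poll at roughly 500ms intervals until a kube-apiserver process appears (pid 1523 here), with the elapsed time reported as a duration metric. A sketch of that wait, using the same pgrep pattern from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep about every 500ms until a
// kube-apiserver process appears or the timeout expires.
func waitForAPIServerProcess(timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for time.Since(start) < timeout {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out) // e.g. "1523" in the log
			return time.Since(start), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return time.Since(start), fmt.Errorf("timed out after %s", timeout)
}

func main() {
	if d, err := waitForAPIServerProcess(30 * time.Second); err == nil {
		fmt.Println("took", d, "to wait for apiserver process to appear")
	}
}
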
	I0422 04:38:32.106702    6416 api_server.go:88] waiting for apiserver healthz status ...
	I0422 04:38:32.106719    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:34.210139    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 04:38:34.210157    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 04:38:34.210166    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:34.246095    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 04:38:34.246114    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 04:38:34.607521    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:34.611305    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 04:38:34.611319    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 04:38:35.108108    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:35.112157    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 04:38:35.112169    6416 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 04:38:35.608732    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:35.613635    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
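
	The 403 -> 500 -> 200 progression above is normal for a restarting control plane: the probe runs as system:anonymous and is rejected until the RBAC bootstrap roles land, then /healthz returns 500 while individual [-]poststarthook checks are still failing, and finally 200 with body "ok". A minimal sketch of such an anonymous poll in Go, assuming the endpoint from the log and illustrative TLS/retry settings:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe is anonymous and runs before client certs are usable,
			// so the apiserver's self-signed cert cannot be verified here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.169.0.16:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403: RBAC for system:anonymous not bootstrapped yet.
				// 500: one or more poststarthook checks still failing.
				// 200 with body "ok": the apiserver is healthy.
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("healthz never returned 200")
	}
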
	I0422 04:38:35.613698    6416 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0422 04:38:35.613706    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:35.613713    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:35.613717    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:35.618517    6416 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 04:38:35.618530    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:35.618535    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:35.618538    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:35.618541    6416 round_trippers.go:580]     Content-Length: 263
	I0422 04:38:35.618549    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:35 GMT
	I0422 04:38:35.618552    6416 round_trippers.go:580]     Audit-Id: c529170c-3b23-45b4-b999-02e57985832e
	I0422 04:38:35.618556    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:35.618558    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:35.618581    6416 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0422 04:38:35.618663    6416 api_server.go:141] control plane version: v1.30.0
	I0422 04:38:35.618674    6416 api_server.go:131] duration metric: took 3.511948374s to wait for apiserver health ...
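
	The control-plane version is read straight out of the /version JSON shown above. A small sketch of decoding it, using a local struct with the relevant fields rather than the upstream version.Info type:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// versionInfo holds the fields of interest from the /version payload.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		payload := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.0","platform":"linux/amd64"}`)
		var v versionInfo
		if err := json.Unmarshal(payload, &v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.30.0
	}
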
	I0422 04:38:35.618682    6416 cni.go:84] Creating CNI manager for ""
	I0422 04:38:35.618686    6416 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0422 04:38:35.642438    6416 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0422 04:38:35.663231    6416 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0422 04:38:35.668943    6416 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0422 04:38:35.668963    6416 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0422 04:38:35.668972    6416 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0422 04:38:35.669009    6416 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 04:38:35.669017    6416 command_runner.go:130] > Access: 2024-04-22 11:38:20.770719391 +0000
	I0422 04:38:35.669022    6416 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0422 04:38:35.669028    6416 command_runner.go:130] > Change: 2024-04-22 11:38:18.653478647 +0000
	I0422 04:38:35.669031    6416 command_runner.go:130] >  Birth: -
	I0422 04:38:35.669161    6416 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0422 04:38:35.669169    6416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0422 04:38:35.696431    6416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0422 04:38:36.251346    6416 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0422 04:38:36.251361    6416 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0422 04:38:36.251367    6416 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0422 04:38:36.251371    6416 command_runner.go:130] > daemonset.apps/kindnet configured
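
	The CNI step above is two operations: write the kindnet manifest to /var/tmp/minikube/cni.yaml on the node, then apply it with the node-local kubectl and kubeconfig. A sketch of that sequence as it would run on the guest; the paths come from the log, while the stand-in manifest content is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Stand-in manifest; the real cni.yaml carries the kindnet DaemonSet,
		// ClusterRole, ClusterRoleBinding, and ServiceAccount applied above.
		manifest := []byte("# kindnet resources go here\n")
		if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
			panic(err)
		}
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
		if err != nil {
			panic(err)
		}
	}
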
	I0422 04:38:36.251471    6416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 04:38:36.251526    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:36.251537    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.251547    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.251551    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.255055    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:36.255070    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.255078    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.255098    6416 round_trippers.go:580]     Audit-Id: e5a8b559-1f3e-4cf9-b695-523472bc9bd4
	I0422 04:38:36.255108    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.255112    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.255116    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.255122    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.255972    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1206"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 81186 chars]
	I0422 04:38:36.258779    6416 system_pods.go:59] 11 kube-system pods found
	I0422 04:38:36.258797    6416 system_pods.go:61] "coredns-7db6d8ff4d-tnr9d" [20633bf5-f995-44a1-b778-441b906496cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 04:38:36.258803    6416 system_pods.go:61] "etcd-multinode-449000" [ff3afd40-3400-4293-9fe4-03d22b8aba13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 04:38:36.258808    6416 system_pods.go:61] "kindnet-jkzvq" [1c07681b-b4af-41b9-917c-01183dcd9e7f] Running
	I0422 04:38:36.258812    6416 system_pods.go:61] "kindnet-pbqsb" [f1537c83-ca18-43b9-8fc5-91de97ef1d76] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0422 04:38:36.258817    6416 system_pods.go:61] "kindnet-sm2l6" [9c708c64-7f5e-4502-9381-d97e024ea343] Running
	I0422 04:38:36.258821    6416 system_pods.go:61] "kube-apiserver-multinode-449000" [cc0086bd-2049-4d09-a498-d26cc78b6968] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 04:38:36.258825    6416 system_pods.go:61] "kube-controller-manager-multinode-449000" [7d730ce3-3f6c-4cc8-aff2-bbcf584056c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 04:38:36.258829    6416 system_pods.go:61] "kube-proxy-4q52c" [764856b1-b523-4b58-8a33-6b81ab928c79] Running
	I0422 04:38:36.258833    6416 system_pods.go:61] "kube-proxy-jrtv2" [e6078b93-4180-484d-b486-9ddf193ba84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 04:38:36.258837    6416 system_pods.go:61] "kube-proxy-lx9ft" [38104bb7-7d9e-4377-9912-06cb23591941] Running
	I0422 04:38:36.258840    6416 system_pods.go:61] "storage-provisioner" [f286f444-3ade-4e54-85bb-8577f0234cca] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 04:38:36.258845    6416 system_pods.go:74] duration metric: took 7.366633ms to wait for pod list to return data ...
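
	The pod-list wait above is a single GET of the kube-system namespace. Roughly equivalent client-go code, assuming the node-local kubeconfig path seen earlier in the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
		}
	}
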
	I0422 04:38:36.258852    6416 node_conditions.go:102] verifying NodePressure condition ...
	I0422 04:38:36.258887    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0422 04:38:36.258892    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.258898    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.258903    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.261838    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:36.261873    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.261881    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.261885    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.261898    6416 round_trippers.go:580]     Audit-Id: 6d19ca39-035d-4de5-a620-8aec9edb6f3d
	I0422 04:38:36.261904    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.261908    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.261912    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.262077    6416 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1206"},"items":[{"metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0422 04:38:36.262498    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:36.262510    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:36.262520    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:36.262523    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:36.262527    6416 node_conditions.go:105] duration metric: took 3.670893ms to run NodePressure ...
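
	The NodePressure verification reads each node's capacity (cpu=2 and ephemeral-storage=17734596Ki here, once per node) and would flag pressure conditions. A hedged client-go sketch of those two reads, again assuming the node-local kubeconfig:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
			for _, c := range n.Status.Conditions {
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
					c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}
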
	I0422 04:38:36.262536    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 04:38:36.390392    6416 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0422 04:38:36.522457    6416 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0422 04:38:36.523385    6416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 04:38:36.523447    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0422 04:38:36.523452    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.523458    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.523462    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.525657    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:36.525666    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.525671    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.525674    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.525676    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.525678    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.525682    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.525684    6416 round_trippers.go:580]     Audit-Id: 5892a5a0-1ae9-40c2-a378-55172958401f
	I0422 04:38:36.526045    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 24485 chars]
	I0422 04:38:36.526622    6416 kubeadm.go:733] kubelet initialised
	I0422 04:38:36.526631    6416 kubeadm.go:734] duration metric: took 3.234861ms waiting for restarted kubelet to initialise ...
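
	The odd `tier%!D(MISSING)control-plane` in the logged URL above appears to be a printf artifact: the URL-encoded `=` (`%3D`) in `labelSelector=tier%3Dcontrol-plane` was evidently passed through a format string with no matching argument. The request itself is just a label-selector list of the static control-plane pods; in client-go terms, roughly:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// client-go URL-encodes this to labelSelector=tier%3Dcontrol-plane on the wire.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "tier=control-plane"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name) // etcd-..., kube-apiserver-..., etc.
		}
	}
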
	I0422 04:38:36.526637    6416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 04:38:36.526667    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:36.526672    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.526677    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.526681    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.528657    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.528666    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.528675    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.528680    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.528684    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.528687    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.528690    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.528692    6416 round_trippers.go:580]     Audit-Id: dcc787ff-e71e-474e-886f-273858aeb216
	I0422 04:38:36.529573    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 81186 chars]
	I0422 04:38:36.531295    6416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.531340    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:36.531346    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.531351    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.531355    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.532672    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.532679    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.532684    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.532687    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.532690    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.532693    6416 round_trippers.go:580]     Audit-Id: f0bdc464-0eb7-4ece-980d-716fad8074ec
	I0422 04:38:36.532697    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.532700    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.533013    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:36.533251    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.533258    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.533264    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.533269    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.534392    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.534401    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.534406    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.534411    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.534415    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.534419    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.534423    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.534425    6416 round_trippers.go:580]     Audit-Id: 48a43717-6a7b-499b-b64a-9061d3621bc3
	I0422 04:38:36.534598    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.534772    6416 pod_ready.go:97] node "multinode-449000" hosting pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.534782    6416 pod_ready.go:81] duration metric: took 3.47759ms for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.534799    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
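
	Each wait above follows the same gating rule: a pod counts as Ready only when its PodReady condition is True, and the wait is skipped outright while the hosting node's NodeReady condition is False (as for multinode-449000 here). A sketch of the two condition checks, assuming client-go's core/v1 types:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// nodeReady reports whether the node's NodeReady condition is True; the
	// log skips a pod's Ready wait whenever this is false for its node.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		var p corev1.Pod
		var n corev1.Node
		fmt.Println(podReady(&p), nodeReady(&n)) // false false for empty objects
	}
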
	I0422 04:38:36.534808    6416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.534842    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:36.534848    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.534854    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.534858    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.536076    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.536086    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.536093    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.536097    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.536104    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.536107    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.536110    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.536113    6416 round_trippers.go:580]     Audit-Id: d31041b8-f593-47b2-a556-c5c256a0cb70
	I0422 04:38:36.536311    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:36.536514    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.536520    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.536526    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.536530    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.537802    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.537810    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.537816    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.537822    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.537825    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.537829    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.537831    6416 round_trippers.go:580]     Audit-Id: c8b485d8-8aea-4e68-bed8-86a42c565330
	I0422 04:38:36.537834    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.538017    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.538192    6416 pod_ready.go:97] node "multinode-449000" hosting pod "etcd-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.538201    6416 pod_ready.go:81] duration metric: took 3.387018ms for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.538207    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "etcd-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.538217    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.538244    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I0422 04:38:36.538249    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.538254    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.538259    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.539435    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.539444    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.539449    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.539477    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.539485    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.539488    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.539491    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.539494    6416 round_trippers.go:580]     Audit-Id: 807b0c92-21fb-452a-bbd1-56e50b42618c
	I0422 04:38:36.539653    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"cc0086bd-2049-4d09-a498-d26cc78b6968","resourceVersion":"1194","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.mirror":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.seen":"2024-04-22T11:29:12.576362787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0422 04:38:36.539885    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.539891    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.539897    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.539901    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.541011    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.541021    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.541042    6416 round_trippers.go:580]     Audit-Id: e41fc13b-3c0d-4dba-b812-6e69e5e48e6f
	I0422 04:38:36.541052    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.541056    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.541059    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.541065    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.541067    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.541167    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.541335    6416 pod_ready.go:97] node "multinode-449000" hosting pod "kube-apiserver-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.541348    6416 pod_ready.go:81] duration metric: took 3.126115ms for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.541353    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-apiserver-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.541361    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.541388    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I0422 04:38:36.541393    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.541398    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.541402    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.542638    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.542646    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.542651    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.542655    6416 round_trippers.go:580]     Audit-Id: 6eb70293-fb76-432d-af2a-fad537691f3b
	I0422 04:38:36.542660    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.542665    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.542668    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.542670    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.542827    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"7d730ce3-3f6c-4cc8-aff2-bbcf584056c7","resourceVersion":"1193","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.mirror":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.seen":"2024-04-22T11:29:12.576363612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0422 04:38:36.653009    6416 request.go:629] Waited for 109.901067ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.653074    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:36.653096    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.653122    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.653129    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.654328    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:36.654340    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.654347    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.654354    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.654359    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:36 GMT
	I0422 04:38:36.654365    6416 round_trippers.go:580]     Audit-Id: 8b638333-4676-4716-9e39-c2a2c555a9a6
	I0422 04:38:36.654370    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.654374    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.654683    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:36.654874    6416 pod_ready.go:97] node "multinode-449000" hosting pod "kube-controller-manager-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:36.654884    6416 pod_ready.go:81] duration metric: took 113.51762ms for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:36.654891    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-controller-manager-multinode-449000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
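
	The "Waited for ... due to client-side throttling" lines come from client-go's token-bucket rate limiter, which defaults to roughly QPS 5 / Burst 10 when a rest.Config leaves those fields unset; the burst of status GETs above exceeds that, so requests queue client-side for the ~100-200ms the log reports. A sketch of where those knobs live (raising them trades the waits for more load on a just-restarted apiserver):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Unset, client-go falls back to its defaults (about QPS 5 / Burst 10),
		// which is what produces the throttling waits logged above.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
		fmt.Printf("client rate limit: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
	}
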
	I0422 04:38:36.654896    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:36.851611    6416 request.go:629] Waited for 196.667348ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q52c
	I0422 04:38:36.851699    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q52c
	I0422 04:38:36.851710    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:36.851722    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:36.851731    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:36.854315    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:36.854328    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:36.854335    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:36.854340    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:36.854344    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:36.854349    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:36.854353    6416 round_trippers.go:580]     Audit-Id: 3c9c16c7-f078-479e-b7ac-5ecc7f6f6364
	I0422 04:38:36.854357    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:36.854743    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4q52c","generateName":"kube-proxy-","namespace":"kube-system","uid":"764856b1-b523-4b58-8a33-6b81ab928c79","resourceVersion":"1162","creationTimestamp":"2024-04-22T11:32:35Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0422 04:38:37.052551    6416 request.go:629] Waited for 197.551006ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m03
	I0422 04:38:37.052728    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m03
	I0422 04:38:37.052740    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.052752    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.052758    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.055373    6416 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0422 04:38:37.055391    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.055399    6416 round_trippers.go:580]     Content-Length: 210
	I0422 04:38:37.055411    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.055417    6416 round_trippers.go:580]     Audit-Id: 05ed52ac-7276-41fd-901f-76455ea13c24
	I0422 04:38:37.055421    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.055425    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.055429    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.055434    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.055463    6416 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-449000-m03\" not found","reason":"NotFound","details":{"name":"multinode-449000-m03","kind":"nodes"},"code":404}
	I0422 04:38:37.055598    6416 pod_ready.go:97] node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
	I0422 04:38:37.055617    6416 pod_ready.go:81] duration metric: took 400.713666ms for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:37.055627    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
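
	The 404 above ends that wait early: when a pod's node no longer exists, the node Get fails with NotFound and the Ready wait is skipped rather than retried. A sketch of that branch using apimachinery's error helpers, with the node name taken from the log and the surrounding wiring illustrative:

	package main

	import (
		"context"
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_, err = cs.CoreV1().Nodes().Get(context.Background(), "multinode-449000-m03", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// The node is gone, so a Ready wait for pods scheduled on it is
			// pointless; skip it, exactly as the log does.
			fmt.Println("node gone; skipping Ready wait for its pods")
			return
		}
		if err != nil {
			panic(err)
		}
	}
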
	I0422 04:38:37.055634    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:37.252016    6416 request.go:629] Waited for 196.330827ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jrtv2
	I0422 04:38:37.252079    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jrtv2
	I0422 04:38:37.252132    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.252143    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.252150    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.254673    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:37.254686    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.254694    6416 round_trippers.go:580]     Audit-Id: 116d4872-cccb-42be-98a3-b84be6adc79b
	I0422 04:38:37.254699    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.254704    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.254708    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.254713    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.254717    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.254889    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jrtv2","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6078b93-4180-484d-b486-9ddf193ba84e","resourceVersion":"1210","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0422 04:38:37.452641    6416 request.go:629] Waited for 197.411736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:37.452684    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:37.452691    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.452699    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.452704    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.455071    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:37.455084    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.455091    6416 round_trippers.go:580]     Audit-Id: f9ad74d9-00cf-4bf3-a98f-0acd5c5bc98e
	I0422 04:38:37.455096    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.455101    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.455106    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.455111    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.455114    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.455269    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1190","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0422 04:38:37.455517    6416 pod_ready.go:97] node "multinode-449000" hosting pod "kube-proxy-jrtv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
	I0422 04:38:37.455531    6416 pod_ready.go:81] duration metric: took 399.887492ms for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:37.455540    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000" hosting pod "kube-proxy-jrtv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-449000" has status "Ready":"False"
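
The two entries above show waitPodCondition skipping a pod whose host node is not Ready (and, for the m03 pod earlier, a node that no longer exists). A minimal sketch of that node gate, with hypothetical names rather than minikube's actual helpers:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsReady fetches a node and tests its Ready condition; a pod on a
    // NotReady (or missing) node is skipped rather than waited on.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err // includes "nodes ... not found", as logged above
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
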
	I0422 04:38:37.455546    6416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:37.651867    6416 request.go:629] Waited for 196.268793ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:37.651926    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:37.651935    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.651977    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.651984    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.654765    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:37.654782    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.654789    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.654799    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.654825    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:37 GMT
	I0422 04:38:37.654837    6416 round_trippers.go:580]     Audit-Id: 6351e0ca-778b-4663-a525-703f77101695
	I0422 04:38:37.654842    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.654847    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.654945    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lx9ft","generateName":"kube-proxy-","namespace":"kube-system","uid":"38104bb7-7d9e-4377-9912-06cb23591941","resourceVersion":"1031","creationTimestamp":"2024-04-22T11:31:54Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:31:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0422 04:38:37.852318    6416 request.go:629] Waited for 197.053333ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:37.852353    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:37.852373    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:37.852379    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:37.852384    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:37.853907    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:37.853920    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:37.853926    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:37.853931    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:37.853934    6416 round_trippers.go:580]     Audit-Id: 28ed923c-5450-4d7c-aaba-08ec83f366c0
	I0422 04:38:37.853937    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:37.853940    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:37.853943    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:37.854049    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000-m02","uid":"cf524355-0b8a-4495-8a18-e4d0f38226d6","resourceVersion":"1048","creationTimestamp":"2024-04-22T11:36:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_22T04_36_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:36:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0422 04:38:37.854222    6416 pod_ready.go:92] pod "kube-proxy-lx9ft" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:37.854231    6416 pod_ready.go:81] duration metric: took 398.676282ms for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:37.854238    6416 pod_ready.go:38] duration metric: took 1.327587035s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
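
The per-pod "Ready" test that this phase repeats for each system pod boils down to reading the PodReady condition from pod status. A sketch under the same assumptions as above (illustrative, not minikube's pod_ready.go):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady reports whether a pod's PodReady condition is True; this is
    // what the pod_ready.go "has status Ready" lines in this log are checking.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
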
	I0422 04:38:37.854250    6416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 04:38:37.863339    6416 command_runner.go:130] > -16
	I0422 04:38:37.863586    6416 ops.go:34] apiserver oom_adj: -16
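
The oom_adj probe above reads the legacy OOM-killer knob for the kube-apiserver process: /proc/<pid>/oom_adj ranges from -17 (never kill) to +15, so the logged -16 makes the API server one of the last candidates for the OOM killer. An equivalent read in Go (Linux-only, illustrative):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // readOOMAdj returns the legacy oom_adj value (-17..+15) for a pid,
    // mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj` from the log.
    func readOOMAdj(pid int) (int, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(b)))
    }
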
	I0422 04:38:37.863593    6416 kubeadm.go:591] duration metric: took 9.078711531s to restartPrimaryControlPlane
	I0422 04:38:37.863599    6416 kubeadm.go:393] duration metric: took 9.097428647s to StartCluster
	I0422 04:38:37.863608    6416 settings.go:142] acquiring lock: {Name:mk90f0ef82bf791c6c0ccd9a8a16931fa57323b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 04:38:37.863686    6416 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:38:37.864075    6416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18711-1033/kubeconfig: {Name:mkd60fed3a4688e81c1999ca37fdf35eadd19815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
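
The lock.go entry above serializes the kubeconfig rewrite (500ms retry delay, 1m timeout) so that concurrent minikube processes cannot corrupt the file. A rough equivalent using a plain advisory flock instead of minikube's own lock package (an assumption: Unix-only, and clientcmd's WriteToFile handles serializing the config itself):

    package main

    import (
        "os"
        "syscall"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // writeKubeconfigLocked holds an exclusive advisory lock while rewriting
    // the kubeconfig, so concurrent writers cannot interleave.
    func writeKubeconfigLocked(path string, cfg clientcmdapi.Config) error {
        lf, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer lf.Close()
        if err := syscall.Flock(int(lf.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(lf.Fd()), syscall.LOCK_UN)
        return clientcmd.WriteToFile(cfg, path)
    }
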
	I0422 04:38:37.864331    6416 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0422 04:38:37.887491    6416 out.go:177] * Verifying Kubernetes components...
	I0422 04:38:37.864345    6416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 04:38:37.864455    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:37.950305    6416 out.go:177] * Enabled addons: 
	I0422 04:38:37.929535    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:38:37.971256    6416 addons.go:505] duration metric: took 106.914277ms for enable addons: enabled=[]
	I0422 04:38:38.118912    6416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 04:38:38.131925    6416 node_ready.go:35] waiting up to 6m0s for node "multinode-449000" to be "Ready" ...
	I0422 04:38:38.131981    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:38.131986    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.131999    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.132001    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.133349    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:38.133364    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.133371    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.133378    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.133382    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.133385    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.133387    6416 round_trippers.go:580]     Audit-Id: 5ee4bdeb-d4a9-4eab-94b0-b9477257f16d
	I0422 04:38:38.133390    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.133498    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:38.133689    6416 node_ready.go:49] node "multinode-449000" has status "Ready":"True"
	I0422 04:38:38.133701    6416 node_ready.go:38] duration metric: took 1.757174ms for node "multinode-449000" to be "Ready" ...
	I0422 04:38:38.133707    6416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
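
This phase opens with a single PodList request (next lines) and then tracks each matching pod individually. A sketch of selecting the system-critical pods by the labels named above, filtering client-side to match the one unfiltered GET that follows (all names illustrative):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // criticalLabels are the selectors named in the log line above.
    var criticalLabels = []struct{ key, value string }{
        {"k8s-app", "kube-dns"},
        {"component", "etcd"},
        {"component", "kube-apiserver"},
        {"component", "kube-controller-manager"},
        {"k8s-app", "kube-proxy"},
        {"component", "kube-scheduler"},
    }

    // systemCriticalPods lists kube-system once and keeps pods carrying any
    // of the critical labels.
    func systemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
        list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var out []corev1.Pod
        for _, p := range list.Items {
            for _, l := range criticalLabels {
                if p.Labels[l.key] == l.value {
                    out = append(out, p)
                    break
                }
            }
        }
        return out, nil
    }
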
	I0422 04:38:38.251743    6416 request.go:629] Waited for 117.992667ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:38.251873    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:38.251886    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.251897    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.251903    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.255540    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:38.255555    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.255561    6416 round_trippers.go:580]     Audit-Id: 755d3e42-dbf4-4c7e-8245-315286a2aa5a
	I0422 04:38:38.255564    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.255567    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.255571    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.255585    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.255591    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.256136    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1212"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 80593 chars]
	I0422 04:38:38.257861    6416 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:38.452657    6416 request.go:629] Waited for 194.724743ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.452778    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.452789    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.452800    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.452808    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.455531    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:38.455543    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.455550    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.455555    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.455559    6416 round_trippers.go:580]     Audit-Id: f0ff366b-ed5d-476a-89c4-ea17d980f532
	I0422 04:38:38.455562    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.455566    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.455570    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.455770    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:38.653655    6416 request.go:629] Waited for 197.490451ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:38.653742    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:38.653753    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.653767    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.653776    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.655850    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:38.655864    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.655871    6416 round_trippers.go:580]     Audit-Id: 6bf53d69-08ea-44ac-896f-d75eba5177d1
	I0422 04:38:38.655900    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.655908    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.655913    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.655918    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.655921    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:38 GMT
	I0422 04:38:38.656123    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:38.851891    6416 request.go:629] Waited for 93.190651ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.852023    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:38.852034    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:38.852044    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:38.852053    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:38.854517    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:38.854530    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:38.854537    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:38.854543    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:38.854547    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:38.854552    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:38.854557    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:38.854560    6416 round_trippers.go:580]     Audit-Id: b98d8805-0338-4f64-bb8e-2799febe32bd
	I0422 04:38:38.854632    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:39.051674    6416 request.go:629] Waited for 196.682688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.051769    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.051778    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.051784    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.051791    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.053679    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:39.053690    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.053695    6416 round_trippers.go:580]     Audit-Id: 639ff229-1ef8-4d32-b611-7d58c2823fac
	I0422 04:38:39.053698    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.053701    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.053704    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.053706    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.053708    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.053809    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:39.258645    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:39.258672    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.258680    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.258685    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.261192    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:39.261212    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.261221    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.261227    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.261231    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.261235    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.261240    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.261245    6416 round_trippers.go:580]     Audit-Id: 765acc6a-86b3-4553-bf71-d8f337f95efb
	I0422 04:38:39.261530    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:39.451883    6416 request.go:629] Waited for 189.986329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.451924    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.451953    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.451959    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.451965    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.454648    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:39.454661    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.454667    6416 round_trippers.go:580]     Audit-Id: a6a1aacd-3eda-49fa-afb5-98bd022f1106
	I0422 04:38:39.454669    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.454672    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.454675    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.454677    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.454679    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.454760    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:39.758113    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:39.758135    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.758148    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.758155    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.760742    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:39.760753    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.760759    6416 round_trippers.go:580]     Audit-Id: 9862c951-baa4-44b7-99c5-a8f3a8360a7b
	I0422 04:38:39.760764    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.760768    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.760772    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.760775    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.760778    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:39 GMT
	I0422 04:38:39.761100    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:39.852040    6416 request.go:629] Waited for 90.577038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.852106    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:39.852111    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:39.852116    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:39.852120    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:39.853899    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:39.853911    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:39.853917    6416 round_trippers.go:580]     Audit-Id: d0c77592-75e5-49cf-b86c-87b520b47e64
	I0422 04:38:39.853920    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:39.853923    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:39.853925    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:39.853928    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:39.853930    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:39.854020    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:40.259034    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:40.270218    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.270235    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.270242    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.272273    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:40.272289    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.272296    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.272302    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.272305    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.272309    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.272313    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.272317    6416 round_trippers.go:580]     Audit-Id: 590c77ec-399e-4542-a9f2-783f7614451b
	I0422 04:38:40.272448    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:40.272820    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:40.272830    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.272839    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.272844    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.273901    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:40.273909    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.273914    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.273918    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.273939    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.273953    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.273957    6416 round_trippers.go:580]     Audit-Id: 18568548-b7ce-4bd7-bcb8-e0021a01484e
	I0422 04:38:40.273961    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.274061    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:40.274226    6416 pod_ready.go:102] pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace has status "Ready":"False"
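
From here the log settles into a steady poll: pod GET, node GET, sleep, repeat, with requests spaced roughly 500ms apart (compare the .2xx and .7xx timestamps). A compact version of such a loop, again as an illustration rather than minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms until the pod's Ready condition is True
    // or the timeout (6m0s in this log) expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as "not yet"; keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
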
	I0422 04:38:40.758220    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:40.758240    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.758252    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.758258    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.760745    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:40.760755    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.760762    6416 round_trippers.go:580]     Audit-Id: 5079e81f-f3db-468f-a0bb-a30d08006d12
	I0422 04:38:40.760768    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.760773    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.760776    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.760791    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.760795    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.760851    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:40.761203    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:40.761213    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:40.761221    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:40.761227    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:40.762807    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:40.762816    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:40.762821    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:40 GMT
	I0422 04:38:40.762824    6416 round_trippers.go:580]     Audit-Id: 29ccfd09-2837-4a1f-b8be-1d9ad18dad91
	I0422 04:38:40.762827    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:40.762829    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:40.762832    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:40.762835    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:40.762928    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:41.258379    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:41.258401    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.258414    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.258422    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.260958    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:41.260971    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.260978    6416 round_trippers.go:580]     Audit-Id: ddacdfed-30cd-44f2-a6c3-023c524e942c
	I0422 04:38:41.260982    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.260986    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.260990    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.260995    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.261001    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.261334    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:41.261715    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:41.261725    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.261733    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.261739    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.262917    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:41.262925    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.262930    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.262933    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.262936    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.262940    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.262944    6416 round_trippers.go:580]     Audit-Id: e29b3c7f-5920-4287-b8a1-e5b20ecd4f74
	I0422 04:38:41.262948    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.263092    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:41.759252    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:41.759279    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.759291    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.759298    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.761720    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:41.761734    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.761741    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.761747    6416 round_trippers.go:580]     Audit-Id: 598beba7-4166-4c7a-b232-70e75936f0b4
	I0422 04:38:41.761750    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.761754    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.761759    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.761764    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.762121    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:41.762485    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:41.762502    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:41.762510    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:41.762517    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:41.763864    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:41.763872    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:41.763880    6416 round_trippers.go:580]     Audit-Id: 9e38d6e0-ed46-4b2e-b08f-2c26d8fd6bd5
	I0422 04:38:41.763885    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:41.763890    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:41.763894    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:41.763898    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:41.763904    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:41 GMT
	I0422 04:38:41.764129    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:42.260086    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:42.260102    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.260110    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.260113    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.262028    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:42.262041    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.262048    6416 round_trippers.go:580]     Audit-Id: 772f3cd6-ea53-487f-a66d-6693912928fc
	I0422 04:38:42.262056    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.262063    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.262066    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.262069    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.262073    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.262368    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:42.262647    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:42.262655    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.262660    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.262664    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.264022    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:42.264030    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.264034    6416 round_trippers.go:580]     Audit-Id: c752dbb7-57ef-4ae3-9a54-0e4ac43d1187
	I0422 04:38:42.264037    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.264042    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.264044    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.264046    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.264049    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.264102    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:42.758403    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:42.758419    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.758425    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.758429    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.761807    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:42.761822    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.761828    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.761831    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.761836    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.761839    6416 round_trippers.go:580]     Audit-Id: dc3e7d4c-9653-4275-8fab-43d05fc4384e
	I0422 04:38:42.761841    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.761844    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.761899    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:42.762194    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:42.762201    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:42.762206    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:42.762209    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:42.764557    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:42.764568    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:42.764572    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:42.764576    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:42.764578    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:42.764581    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:42 GMT
	I0422 04:38:42.764584    6416 round_trippers.go:580]     Audit-Id: bf555109-b5da-43da-a16b-d5f37bfb7242
	I0422 04:38:42.764586    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:42.764647    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:42.764837    6416 pod_ready.go:102] pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace has status "Ready":"False"
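	[Editor's note] Each pod_ready "Ready":"False" line above is derived from the status.conditions array inside the Pod JSON bodies being fetched. A minimal sketch of that check, assuming plain encoding/json over a body like the ones logged here (the podStatus type and isPodReady helper are illustrative names, not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// podStatus models only the fields needed to read status.conditions
	// from a Pod API object.
	type podStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// isPodReady reports whether the pod's "Ready" condition is "True".
	func isPodReady(body []byte) (bool, error) {
		var p podStatus
		if err := json.Unmarshal(body, &p); err != nil {
			return false, err
		}
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		// No Ready condition reported yet: treat as not ready.
		return false, nil
	}

	func main() {
		// Hypothetical trimmed body; the real responses above carry the
		// full Pod object before truncation.
		body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
		ready, err := isPodReady(body)
		fmt.Println(ready, err) // false <nil>
	}

	In the log, minikube keeps re-fetching the pod until this condition flips to "True" (resourceVersion 1200 -> 1290 below), rather than watching for updates.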
	I0422 04:38:43.258020    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:43.258035    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.258042    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.258045    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.259661    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:43.259672    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.259677    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.259680    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.259682    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.259685    6416 round_trippers.go:580]     Audit-Id: 46a999b0-e90e-43ea-8ca0-faf384e56ad4
	I0422 04:38:43.259688    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.259689    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.259926    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:43.260233    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:43.260241    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.260247    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.260249    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.262390    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:43.262402    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.262409    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.262413    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.262417    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.262421    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.262423    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.262426    6416 round_trippers.go:580]     Audit-Id: f342952f-c3e6-4dfe-bcc5-cd1b10b7a535
	I0422 04:38:43.262621    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:43.758874    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:43.758902    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.758913    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.758921    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.761614    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:43.761632    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.761640    6416 round_trippers.go:580]     Audit-Id: 34c126a4-a5f7-44d8-bc52-867774f1460a
	I0422 04:38:43.761645    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.761649    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.761652    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.761679    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.761690    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.761860    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1200","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0422 04:38:43.762244    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:43.762255    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:43.762263    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:43.762266    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:43.763624    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:43.763631    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:43.763636    6416 round_trippers.go:580]     Audit-Id: af7aed87-b9d0-4a8d-ae3e-82eece9f0847
	I0422 04:38:43.763639    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:43.763642    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:43.763645    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:43.763648    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:43.763651    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:43 GMT
	I0422 04:38:43.763895    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:44.258677    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tnr9d
	I0422 04:38:44.258702    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.258714    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.258720    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.261510    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:44.261527    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.261534    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.261538    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.261541    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.261545    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.261548    6416 round_trippers.go:580]     Audit-Id: 315f5465-c97c-4898-ac14-90127538a842
	I0422 04:38:44.261552    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.261635    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1290","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0422 04:38:44.262002    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:44.262012    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.262018    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.262022    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.263490    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.263500    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.263505    6416 round_trippers.go:580]     Audit-Id: 95440de8-4325-4764-aebf-d1aad22719d4
	I0422 04:38:44.263523    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.263530    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.263533    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.263535    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.263538    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.263632    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:44.263804    6416 pod_ready.go:92] pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:44.263813    6416 pod_ready.go:81] duration metric: took 6.005909926s for pod "coredns-7db6d8ff4d-tnr9d" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:44.263819    6416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
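	[Editor's note] The timestamps above show the shape of this wait: one GET for the pod and one for its node roughly every 500ms, with a 6m0s budget per pod (here coredns became Ready after ~6s, and the same loop restarts for etcd-multinode-449000). A minimal sketch of such a loop, assuming client-go; waitForPodReady and its parameters are illustrative, not minikube's pod_ready.go API:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls the named pod until its Ready condition is
	// True, the context is cancelled, or the timeout elapses.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
			}
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	Polling (rather than a watch) keeps the logic simple and produces exactly the repeated GET pattern recorded in this log, at the cost of the extra round trips visible above.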
	I0422 04:38:44.263844    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:44.263849    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.263854    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.263857    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.265019    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.265027    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.265033    6416 round_trippers.go:580]     Audit-Id: 1dc5eb89-d434-4040-8fe9-a2472bcdeb29
	I0422 04:38:44.265036    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.265039    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.265043    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.265045    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.265049    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.265148    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:44.265366    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:44.265373    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.265383    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.265388    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.266503    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.266511    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.266517    6416 round_trippers.go:580]     Audit-Id: c0c8432f-04ce-407a-8ad5-55c2bc33b6b3
	I0422 04:38:44.266523    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.266529    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.266532    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.266536    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.266540    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.266708    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:44.764111    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:44.764138    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.764152    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.764158    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.766411    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:44.766420    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.766425    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.766428    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.766431    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.766434    6416 round_trippers.go:580]     Audit-Id: 45f79e37-2c5c-439f-98d1-a5341215bb6f
	I0422 04:38:44.766437    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.766440    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.766740    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:44.766987    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:44.766994    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:44.767000    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:44.767004    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:44.768050    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:44.768061    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:44.768071    6416 round_trippers.go:580]     Audit-Id: 99562bf7-89cc-4e0b-85a7-f43e8c3f42ed
	I0422 04:38:44.768075    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:44.768080    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:44.768082    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:44.768085    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:44.768088    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:44 GMT
	I0422 04:38:44.768274    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:45.266095    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:45.272136    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.272174    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.272180    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.274741    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:45.274753    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.274760    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.274764    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.274768    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.274771    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.274774    6416 round_trippers.go:580]     Audit-Id: 4759a14a-bb8d-469f-bf38-86c3351c1bf2
	I0422 04:38:45.274777    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.275204    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:45.275526    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:45.275535    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.275543    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.275547    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.276897    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:45.276906    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.276911    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.276914    6416 round_trippers.go:580]     Audit-Id: b6980c9b-3e0e-4096-8a6b-8ae6c00ea8b1
	I0422 04:38:45.276917    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.276920    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.276923    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.276925    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.277007    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:45.766001    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:45.766052    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.766065    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.766073    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.769114    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:45.769134    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.769144    6416 round_trippers.go:580]     Audit-Id: 1530a2ba-ff19-463f-b50b-64b1174e18b0
	I0422 04:38:45.769155    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.769160    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.769166    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.769170    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.769174    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.769415    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:45.769660    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:45.769667    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:45.769672    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:45.769677    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:45.770995    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:45.771003    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:45.771008    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:45.771011    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:45.771015    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:45 GMT
	I0422 04:38:45.771018    6416 round_trippers.go:580]     Audit-Id: ce8f0b43-9473-42d5-b65e-f6e290914c57
	I0422 04:38:45.771021    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:45.771023    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:45.771459    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:46.263959    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:46.263985    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.264013    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.264026    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.266366    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:46.266378    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.266385    6416 round_trippers.go:580]     Audit-Id: 911cd35e-4164-4039-8864-e21336dc297a
	I0422 04:38:46.266389    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.266393    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.266397    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.266400    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.266404    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.266561    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:46.266892    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:46.266901    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.266908    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.266914    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.268299    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:46.268312    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.268317    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.268321    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.268325    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.268328    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.268331    6416 round_trippers.go:580]     Audit-Id: 799fe622-6b95-4208-8f8c-b97ef24f4456
	I0422 04:38:46.268334    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.268455    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:46.268628    6416 pod_ready.go:102] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"False"
	I0422 04:38:46.763985    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:46.764024    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.764048    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.764053    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.765837    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:46.765847    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.765852    6416 round_trippers.go:580]     Audit-Id: c9d53c64-f445-4d8b-9792-713bdfd49228
	I0422 04:38:46.765856    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.765858    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.765887    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.765894    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.765897    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.766004    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:46.766242    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:46.766249    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:46.766254    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:46.766258    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:46.768012    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:46.768019    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:46.768023    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:46.768027    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:46 GMT
	I0422 04:38:46.768029    6416 round_trippers.go:580]     Audit-Id: 0f422866-aa4c-4709-8b9a-f3c310fa0a14
	I0422 04:38:46.768032    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:46.768036    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:46.768040    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:46.768177    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:47.264105    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:47.264130    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.264141    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.264149    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.268211    6416 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 04:38:47.268223    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.268245    6416 round_trippers.go:580]     Audit-Id: ebe49c5c-07a8-4011-a6fe-5767063aa5b2
	I0422 04:38:47.268252    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.268256    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.268260    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.268263    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.268267    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.268341    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:47.268603    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:47.268610    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.268616    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.268620    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.270919    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:47.270928    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.270933    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.270936    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.270939    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.270941    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.270944    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.270947    6416 round_trippers.go:580]     Audit-Id: 853629e4-7a64-4d9d-8289-e1ede9c3c21d
	I0422 04:38:47.271046    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:47.764982    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:47.765001    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.765035    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.765041    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.767482    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:47.767493    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.767517    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.767526    6416 round_trippers.go:580]     Audit-Id: 75be00ec-45af-4003-a457-d0dbfbdb0fa0
	I0422 04:38:47.767529    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.767538    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.767542    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.767544    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.767712    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:47.768051    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:47.768058    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:47.768064    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:47.768067    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:47.769281    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:47.769289    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:47.769294    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:47.769298    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:47.769301    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:47 GMT
	I0422 04:38:47.769303    6416 round_trippers.go:580]     Audit-Id: 04b9a1cf-b605-4cb6-be5f-fe419aa474ad
	I0422 04:38:47.769305    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:47.769308    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:47.769380    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:48.265231    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:48.265247    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.265252    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.265257    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.267289    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:48.267297    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.267302    6416 round_trippers.go:580]     Audit-Id: 9ce00ecc-8b0d-4902-8c44-c54b6f296c86
	I0422 04:38:48.267305    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.267308    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.267323    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.267329    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.267331    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.267487    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:48.267828    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:48.267835    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.267841    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.267845    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.269220    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:48.269231    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.269238    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.269244    6416 round_trippers.go:580]     Audit-Id: 3128a881-e0ef-4652-9b51-c8c7010317f0
	I0422 04:38:48.269252    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.269261    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.269265    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.269281    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.269431    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:48.269605    6416 pod_ready.go:102] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"False"
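[Editor's note] The block above is one iteration of minikube's readiness poll: every ~500 ms it re-fetches the etcd pod and its node, and pod_ready.go logs `has status "Ready":"False"` until the pod's Ready condition flips. A rough client-go equivalent of that loop follows; the clientset, interval, and timeout are assumptions matching the cadence and the 6m0s bound visible in this log, not minikube's actual implementation.

package ready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's Ready condition is True, mirroring
// the GET-every-500ms pattern in the trace above.
func waitPodReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}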
	I0422 04:38:48.765761    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:48.765786    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.765822    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.765831    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.768239    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:48.768252    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.768259    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.768265    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.768270    6416 round_trippers.go:580]     Audit-Id: d1fbe98a-cc34-4c5b-a7f8-aa5d9b8c8d38
	I0422 04:38:48.768273    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.768282    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.768289    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.768483    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:48.768804    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:48.768814    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:48.768821    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:48.768833    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:48.770079    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:48.770089    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:48.770097    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:48 GMT
	I0422 04:38:48.770101    6416 round_trippers.go:580]     Audit-Id: b8a8eca9-846e-4ebc-81ca-890df5377df7
	I0422 04:38:48.770105    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:48.770121    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:48.770131    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:48.770149    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:48.770273    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:49.264852    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:49.264878    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.264889    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.264898    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.267357    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:49.267372    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.267382    6416 round_trippers.go:580]     Audit-Id: 6a10a56f-e90f-4752-b78a-198e4fbd3395
	I0422 04:38:49.267389    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.267393    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.267400    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.267405    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.267409    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.267682    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:49.268032    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:49.268042    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.268049    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.268054    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.269410    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:49.269418    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.269423    6416 round_trippers.go:580]     Audit-Id: c129c66a-52ad-4856-83ae-981d1fcb4394
	I0422 04:38:49.269426    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.269428    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.269431    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.269433    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.269436    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.269544    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:49.764744    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:49.764802    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.764816    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.764823    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.767638    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:49.767654    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.767661    6416 round_trippers.go:580]     Audit-Id: 3a9bc166-1062-4cc7-b46d-7a9d608607a6
	I0422 04:38:49.767666    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.767669    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.767694    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.767701    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.767706    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.768032    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:49.768379    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:49.768389    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:49.768397    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:49.768403    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:49.769897    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:49.769917    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:49.769927    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:49.769933    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:49.769939    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:49 GMT
	I0422 04:38:49.769942    6416 round_trippers.go:580]     Audit-Id: e21ae3b6-2625-46d9-813c-8fe2a01c647a
	I0422 04:38:49.769945    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:49.769947    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:49.770250    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.264456    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:50.270447    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.270464    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.270471    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.273384    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:50.273400    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.273407    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.273411    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.273414    6416 round_trippers.go:580]     Audit-Id: b4536eb8-0f8d-4d66-a15e-a19d7e686a19
	I0422 04:38:50.273418    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.273422    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.273443    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.273573    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1195","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0422 04:38:50.273911    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.273921    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.273928    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.273932    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.275273    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.275282    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.275286    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.275289    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.275292    6416 round_trippers.go:580]     Audit-Id: 17928738-05f1-4b0b-b8d9-29acec3403fa
	I0422 04:38:50.275295    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.275299    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.275301    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.275368    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.275552    6416 pod_ready.go:102] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"False"
	I0422 04:38:50.765139    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-449000
	I0422 04:38:50.765154    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.765160    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.765163    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.766756    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.766770    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.766776    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.766779    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.766783    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.766786    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.766789    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.766792    6416 round_trippers.go:580]     Audit-Id: fbb2d05b-1b04-463c-89d8-0da3fdea8fd9
	I0422 04:38:50.767001    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-449000","namespace":"kube-system","uid":"ff3afd40-3400-4293-9fe4-03d22b8aba13","resourceVersion":"1303","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.mirror":"e1b3c869a7cf9eae6c53efe6a7b8f0ed","kubernetes.io/config.seen":"2024-04-22T11:29:12.576359804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0422 04:38:50.767295    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.767303    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.767309    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.767313    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.768512    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.768521    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.768526    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.768529    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.768532    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.768536    6416 round_trippers.go:580]     Audit-Id: 2a9bce2b-b2a3-4ef6-8be6-ecc6f0afb22b
	I0422 04:38:50.768539    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.768542    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.768624    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.768811    6416 pod_ready.go:92] pod "etcd-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.768819    6416 pod_ready.go:81] duration metric: took 6.504960902s for pod "etcd-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.768829    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.768870    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-449000
	I0422 04:38:50.768876    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.768881    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.768885    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.770033    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.770066    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.770073    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.770077    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.770082    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.770087    6416 round_trippers.go:580]     Audit-Id: 1ba6bc17-ac93-466e-b4c2-76c657606f1c
	I0422 04:38:50.770090    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.770095    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.770336    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-449000","namespace":"kube-system","uid":"cc0086bd-2049-4d09-a498-d26cc78b6968","resourceVersion":"1279","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.mirror":"c67459cca8bc290b8ebe6f499cbd5c4c","kubernetes.io/config.seen":"2024-04-22T11:29:12.576362787Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0422 04:38:50.770663    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.770669    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.770674    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.770679    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.772449    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.772459    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.772466    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.772480    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.772485    6416 round_trippers.go:580]     Audit-Id: c9fe64eb-5eb8-4273-b5a7-3e12fd8fa9c1
	I0422 04:38:50.772487    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.772490    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.772493    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.772578    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.772735    6416 pod_ready.go:92] pod "kube-apiserver-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.772743    6416 pod_ready.go:81] duration metric: took 3.907787ms for pod "kube-apiserver-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.772748    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.772781    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-449000
	I0422 04:38:50.772786    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.772791    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.772795    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.774160    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.774169    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.774175    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.774178    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.774180    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.774182    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.774186    6416 round_trippers.go:580]     Audit-Id: 8df13ed3-5f76-4a6d-9964-b92ff2b0ce04
	I0422 04:38:50.774189    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.774293    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-449000","namespace":"kube-system","uid":"7d730ce3-3f6c-4cc8-aff2-bbcf584056c7","resourceVersion":"1281","creationTimestamp":"2024-04-22T11:29:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.mirror":"1e27c5a6b5c9863a987f013692b0cafa","kubernetes.io/config.seen":"2024-04-22T11:29:12.576363612Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0422 04:38:50.774517    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.774524    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.774530    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.774534    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.775402    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:50.775408    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.775411    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.775421    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.775427    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.775433    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.775438    6416 round_trippers.go:580]     Audit-Id: a3118fc0-6324-4f73-a6cc-2197d9c958e5
	I0422 04:38:50.775443    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.775545    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.775707    6416 pod_ready.go:92] pod "kube-controller-manager-multinode-449000" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.775714    6416 pod_ready.go:81] duration metric: took 2.960309ms for pod "kube-controller-manager-multinode-449000" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.775719    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.775743    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q52c
	I0422 04:38:50.775747    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.775752    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.775756    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.776718    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:50.776724    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.776729    6416 round_trippers.go:580]     Audit-Id: 43b58cce-ba9f-4610-96fa-682e917b17e9
	I0422 04:38:50.776733    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.776739    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.776742    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.776746    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.776758    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.776882    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4q52c","generateName":"kube-proxy-","namespace":"kube-system","uid":"764856b1-b523-4b58-8a33-6b81ab928c79","resourceVersion":"1162","creationTimestamp":"2024-04-22T11:32:35Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0422 04:38:50.777094    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m03
	I0422 04:38:50.777101    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.777106    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.777109    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.778000    6416 round_trippers.go:574] Response Status: 404 Not Found in 0 milliseconds
	I0422 04:38:50.778007    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.778012    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.778028    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.778037    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.778041    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.778045    6416 round_trippers.go:580]     Content-Length: 210
	I0422 04:38:50.778051    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.778055    6416 round_trippers.go:580]     Audit-Id: 3c8e6efd-7787-488b-b4ed-39312495da3b
	I0422 04:38:50.778072    6416 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-449000-m03\" not found","reason":"NotFound","details":{"name":"multinode-449000-m03","kind":"nodes"},"code":404}
	I0422 04:38:50.778117    6416 pod_ready.go:97] node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
	I0422 04:38:50.778125    6416 pod_ready.go:81] duration metric: took 2.400385ms for pod "kube-proxy-4q52c" in "kube-system" namespace to be "Ready" ...
	E0422 04:38:50.778130    6416 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-449000-m03" hosting pod "kube-proxy-4q52c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-449000-m03": nodes "multinode-449000-m03" not found
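[Editor's note] The 404 above is why kube-proxy-4q52c is skipped rather than retried: its host node multinode-449000-m03 no longer exists at this point in the test, and the checker treats "node gone" differently from a transient API error. A hedged sketch of that branch; the function name and skip semantics are illustrative, while apierrors.IsNotFound is the standard client-go idiom for this distinction.

package ready

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostNodeReady reports whether a pod's node is Ready. A NotFound error
// means the node is gone for good, so the caller skips the pod instead
// of polling further (the "(skipping!)" path in the log above).
func hostNodeReady(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, fmt.Errorf("node %q not found: skip waiting on its pods", nodeName)
	}
	if err != nil {
		return false, err // transient; safe to retry
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}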
	I0422 04:38:50.778135    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.778169    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jrtv2
	I0422 04:38:50.778174    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.778179    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.778182    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.779084    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:50.779092    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.779099    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.779104    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.779109    6416 round_trippers.go:580]     Audit-Id: 8d2a0118-805c-4b91-bc4e-d9ca1837220e
	I0422 04:38:50.779115    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.779121    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.779132    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.779238    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jrtv2","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6078b93-4180-484d-b486-9ddf193ba84e","resourceVersion":"1210","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0422 04:38:50.779463    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000
	I0422 04:38:50.779470    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.779475    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.779479    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.780533    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.780538    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.780543    6416 round_trippers.go:580]     Audit-Id: 29ea273b-3d02-4f60-9358-61077e4e1c4c
	I0422 04:38:50.780545    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.780566    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.780570    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.780573    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.780576    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:50 GMT
	I0422 04:38:50.780765    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-22T11:29:10Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0422 04:38:50.780925    6416 pod_ready.go:92] pod "kube-proxy-jrtv2" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:50.780932    6416 pod_ready.go:81] duration metric: took 2.791209ms for pod "kube-proxy-jrtv2" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.780937    6416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:50.966555    6416 request.go:629] Waited for 185.589322ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:50.966621    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lx9ft
	I0422 04:38:50.966626    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:50.966632    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:50.966636    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:50.968144    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:50.968153    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:50.968158    6416 round_trippers.go:580]     Audit-Id: 65f908c6-4282-4d43-a000-679bd0f86f8f
	I0422 04:38:50.968161    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:50.968164    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:50.968166    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:50.968181    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:50.968187    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:50.968350    6416 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lx9ft","generateName":"kube-proxy-","namespace":"kube-system","uid":"38104bb7-7d9e-4377-9912-06cb23591941","resourceVersion":"1031","creationTimestamp":"2024-04-22T11:31:54Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"79038979-7361-438e-afbc-d9bb2ecb3501","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:31:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79038979-7361-438e-afbc-d9bb2ecb3501\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
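[Editor's note] The `Waited for …ms due to client-side throttling, not priority and fairness` messages around this point come from client-go's own token-bucket rate limiter, which kicks in because the status check fires many GETs back to back; the wording explicitly rules out server-side APF (the X-Kubernetes-Pf-* headers seen above). The limiter is configured per rest.Config; a client that genuinely needs more headroom can raise QPS and Burst, as in this sketch (the values are arbitrary examples; the client-go defaults are 5 and 10).

package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset builds a clientset whose rate limiter allows more
// requests per second than the defaults that produced the waits above.
func newClientset(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default is 10
	return kubernetes.NewForConfig(cfg)
}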
	I0422 04:38:51.166553    6416 request.go:629] Waited for 197.931887ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:51.166609    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-449000-m02
	I0422 04:38:51.166616    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.166628    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.166631    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.168178    6416 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 04:38:51.168187    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.168192    6416 round_trippers.go:580]     Audit-Id: 83d73d71-3a36-41cb-96b8-87e83ab6c9fa
	I0422 04:38:51.168195    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.168198    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.168202    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.168205    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.168207    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.168266    6416 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-449000-m02","uid":"cf524355-0b8a-4495-8a18-e4d0f38226d6","resourceVersion":"1048","creationTimestamp":"2024-04-22T11:36:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_22T04_36_49_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:36:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0422 04:38:51.168440    6416 pod_ready.go:92] pod "kube-proxy-lx9ft" in "kube-system" namespace has status "Ready":"True"
	I0422 04:38:51.168449    6416 pod_ready.go:81] duration metric: took 387.505107ms for pod "kube-proxy-lx9ft" in "kube-system" namespace to be "Ready" ...
	I0422 04:38:51.168456    6416 pod_ready.go:38] duration metric: took 13.034672669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 04:38:51.168472    6416 api_server.go:52] waiting for apiserver process to appear ...
	I0422 04:38:51.168526    6416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:38:51.184969    6416 command_runner.go:130] > 1523
	I0422 04:38:51.185682    6416 api_server.go:72] duration metric: took 13.321263077s to wait for apiserver process to appear ...
	I0422 04:38:51.185694    6416 api_server.go:88] waiting for apiserver healthz status ...
	I0422 04:38:51.185708    6416 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:38:51.190236    6416 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0422 04:38:51.190268    6416 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0422 04:38:51.190272    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.190279    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.190284    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.190932    6416 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 04:38:51.190941    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.190948    6416 round_trippers.go:580]     Content-Length: 263
	I0422 04:38:51.190952    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.190954    6416 round_trippers.go:580]     Audit-Id: 3295cd18-cbae-4fa3-95bd-2fbd1071fba3
	I0422 04:38:51.190957    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.190960    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.190962    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.190965    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.191004    6416 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0422 04:38:51.191031    6416 api_server.go:141] control plane version: v1.30.0
	I0422 04:38:51.191042    6416 api_server.go:131] duration metric: took 5.341125ms to wait for apiserver health ...
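The health check logged above is two plain HTTPS probes: GET /healthz must return 200 with body "ok", and GET /version returns the JSON version payload shown. A minimal Go sketch of the same pair of probes follows. It is an illustration, not minikube's client: it assumes anonymous access to the two endpoints and uses InsecureSkipVerify for the sketch only, where the real runner authenticates with the profile's client certificates.

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	// versionInfo mirrors a few fields of the /version response body in the log.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}

	func main() {
		// InsecureSkipVerify is an assumption for this sketch only.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		// /healthz must come back 200/"ok" before the version probe is meaningful.
		resp, err := client.Get("https://192.169.0.16:8443/healthz")
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

		// /version returns the JSON logged above (major, minor, gitVersion, ...).
		resp, err = client.Get("https://192.169.0.16:8443/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v versionInfo
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion)
	}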
	I0422 04:38:51.191048    6416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 04:38:51.367153    6416 request.go:629] Waited for 176.072086ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.367207    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.367212    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.367218    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.367222    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.370493    6416 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 04:38:51.370502    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.370507    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.370510    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.370513    6416 round_trippers.go:580]     Audit-Id: e6265c70-e04e-488a-b481-9e0d923b91a4
	I0422 04:38:51.370516    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.370521    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.370525    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.371752    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1290","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86242 chars]
	I0422 04:38:51.373654    6416 system_pods.go:59] 12 kube-system pods found
	I0422 04:38:51.373664    6416 system_pods.go:61] "coredns-7db6d8ff4d-tnr9d" [20633bf5-f995-44a1-b778-441b906496cd] Running
	I0422 04:38:51.373668    6416 system_pods.go:61] "etcd-multinode-449000" [ff3afd40-3400-4293-9fe4-03d22b8aba13] Running
	I0422 04:38:51.373671    6416 system_pods.go:61] "kindnet-jkzvq" [1c07681b-b4af-41b9-917c-01183dcd9e7f] Running
	I0422 04:38:51.373674    6416 system_pods.go:61] "kindnet-pbqsb" [f1537c83-ca18-43b9-8fc5-91de97ef1d76] Running
	I0422 04:38:51.373676    6416 system_pods.go:61] "kindnet-sm2l6" [9c708c64-7f5e-4502-9381-d97e024ea343] Running
	I0422 04:38:51.373679    6416 system_pods.go:61] "kube-apiserver-multinode-449000" [cc0086bd-2049-4d09-a498-d26cc78b6968] Running
	I0422 04:38:51.373683    6416 system_pods.go:61] "kube-controller-manager-multinode-449000" [7d730ce3-3f6c-4cc8-aff2-bbcf584056c7] Running
	I0422 04:38:51.373686    6416 system_pods.go:61] "kube-proxy-4q52c" [764856b1-b523-4b58-8a33-6b81ab928c79] Running
	I0422 04:38:51.373689    6416 system_pods.go:61] "kube-proxy-jrtv2" [e6078b93-4180-484d-b486-9ddf193ba84e] Running
	I0422 04:38:51.373692    6416 system_pods.go:61] "kube-proxy-lx9ft" [38104bb7-7d9e-4377-9912-06cb23591941] Running
	I0422 04:38:51.373696    6416 system_pods.go:61] "kube-scheduler-multinode-449000" [227c4576-009e-4a6c-8bc8-a3e9d9e62aae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 04:38:51.373700    6416 system_pods.go:61] "storage-provisioner" [f286f444-3ade-4e54-85bb-8577f0234cca] Running
	I0422 04:38:51.373716    6416 system_pods.go:74] duration metric: took 182.661024ms to wait for pod list to return data ...
	I0422 04:38:51.373724    6416 default_sa.go:34] waiting for default service account to be created ...
	I0422 04:38:51.567155    6416 request.go:629] Waited for 193.384955ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0422 04:38:51.567202    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0422 04:38:51.567207    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.567214    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.567218    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.573022    6416 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 04:38:51.573035    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.573050    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.573055    6416 round_trippers.go:580]     Audit-Id: cf6e9481-a0d3-4de5-b256-26c4c8e666f4
	I0422 04:38:51.573060    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.573064    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.573073    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.573076    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.573079    6416 round_trippers.go:580]     Content-Length: 262
	I0422 04:38:51.573090    6416 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"644e2bca-08d9-4fd2-bd78-af290bc8acca","resourceVersion":"355","creationTimestamp":"2024-04-22T11:29:27Z"}}]}
	I0422 04:38:51.573208    6416 default_sa.go:45] found service account: "default"
	I0422 04:38:51.573218    6416 default_sa.go:55] duration metric: took 199.488037ms for default service account to be created ...
	I0422 04:38:51.573226    6416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 04:38:51.767187    6416 request.go:629] Waited for 193.91906ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.767260    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0422 04:38:51.767270    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.767280    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.767291    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.771453    6416 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 04:38:51.771476    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.771483    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:51 GMT
	I0422 04:38:51.771486    6416 round_trippers.go:580]     Audit-Id: e5e2291c-d4c6-4259-83a4-be723c83db8f
	I0422 04:38:51.771500    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.771503    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.771506    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.771520    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.772043    6416 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tnr9d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"20633bf5-f995-44a1-b778-441b906496cd","resourceVersion":"1290","creationTimestamp":"2024-04-22T11:29:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-22T11:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ad6736-8c1a-4a6b-9bf7-ac5c4e732a91\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86242 chars]
	I0422 04:38:51.773953    6416 system_pods.go:86] 12 kube-system pods found
	I0422 04:38:51.773965    6416 system_pods.go:89] "coredns-7db6d8ff4d-tnr9d" [20633bf5-f995-44a1-b778-441b906496cd] Running
	I0422 04:38:51.773969    6416 system_pods.go:89] "etcd-multinode-449000" [ff3afd40-3400-4293-9fe4-03d22b8aba13] Running
	I0422 04:38:51.773974    6416 system_pods.go:89] "kindnet-jkzvq" [1c07681b-b4af-41b9-917c-01183dcd9e7f] Running
	I0422 04:38:51.773977    6416 system_pods.go:89] "kindnet-pbqsb" [f1537c83-ca18-43b9-8fc5-91de97ef1d76] Running
	I0422 04:38:51.773980    6416 system_pods.go:89] "kindnet-sm2l6" [9c708c64-7f5e-4502-9381-d97e024ea343] Running
	I0422 04:38:51.773984    6416 system_pods.go:89] "kube-apiserver-multinode-449000" [cc0086bd-2049-4d09-a498-d26cc78b6968] Running
	I0422 04:38:51.773988    6416 system_pods.go:89] "kube-controller-manager-multinode-449000" [7d730ce3-3f6c-4cc8-aff2-bbcf584056c7] Running
	I0422 04:38:51.773991    6416 system_pods.go:89] "kube-proxy-4q52c" [764856b1-b523-4b58-8a33-6b81ab928c79] Running
	I0422 04:38:51.773994    6416 system_pods.go:89] "kube-proxy-jrtv2" [e6078b93-4180-484d-b486-9ddf193ba84e] Running
	I0422 04:38:51.773998    6416 system_pods.go:89] "kube-proxy-lx9ft" [38104bb7-7d9e-4377-9912-06cb23591941] Running
	I0422 04:38:51.774005    6416 system_pods.go:89] "kube-scheduler-multinode-449000" [227c4576-009e-4a6c-8bc8-a3e9d9e62aae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 04:38:51.774012    6416 system_pods.go:89] "storage-provisioner" [f286f444-3ade-4e54-85bb-8577f0234cca] Running
	I0422 04:38:51.774018    6416 system_pods.go:126] duration metric: took 200.786794ms to wait for k8s-apps to be running ...
	I0422 04:38:51.774026    6416 system_svc.go:44] waiting for kubelet service to be running ...
	I0422 04:38:51.774081    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 04:38:51.786697    6416 system_svc.go:56] duration metric: took 12.665074ms for WaitForService to wait for kubelet
	I0422 04:38:51.786712    6416 kubeadm.go:576] duration metric: took 13.922291142s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 04:38:51.786728    6416 node_conditions.go:102] verifying NodePressure condition ...
	I0422 04:38:51.967301    6416 request.go:629] Waited for 180.495069ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0422 04:38:51.967416    6416 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0422 04:38:51.967429    6416 round_trippers.go:469] Request Headers:
	I0422 04:38:51.967440    6416 round_trippers.go:473]     Accept: application/json, */*
	I0422 04:38:51.967446    6416 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0422 04:38:51.969959    6416 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 04:38:51.969974    6416 round_trippers.go:577] Response Headers:
	I0422 04:38:51.969981    6416 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfad8265-8988-44b1-9bf7-adcfd5806a57
	I0422 04:38:51.969986    6416 round_trippers.go:580]     Date: Mon, 22 Apr 2024 11:38:52 GMT
	I0422 04:38:51.969990    6416 round_trippers.go:580]     Audit-Id: d75795fc-adb0-41c3-bca7-51415a4e6406
	I0422 04:38:51.970015    6416 round_trippers.go:580]     Cache-Control: no-cache, private
	I0422 04:38:51.970025    6416 round_trippers.go:580]     Content-Type: application/json
	I0422 04:38:51.970030    6416 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1bd1c201-d8ec-4879-b796-c410f4ec058a
	I0422 04:38:51.970317    6416 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1308"},"items":[{"metadata":{"name":"multinode-449000","uid":"4cc49b82-fcfa-4851-8f66-707c17e0a66d","resourceVersion":"1212","creationTimestamp":"2024-04-22T11:29:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-449000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3838931194b4975fce64faf7ca14560885944437","minikube.k8s.io/name":"multinode-449000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_22T04_29_13_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0422 04:38:51.970734    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:51.970747    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:51.970754    6416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 04:38:51.970758    6416 node_conditions.go:123] node cpu capacity is 2
	I0422 04:38:51.970763    6416 node_conditions.go:105] duration metric: took 184.029883ms to run NodePressure ...
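The two capacity pairs above are read out of the NodeList response body. Below is a minimal Go sketch of decoding those fields; the JSON literal is a trimmed, assumed stand-in for the full NodeList logged earlier, keeping only the name and capacity fields.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Trimmed, assumed stand-in for the NodeList payload in the log above.
	const nodeList = `{"items":[
	  {"metadata":{"name":"multinode-449000"},
	   "status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}},
	  {"metadata":{"name":"multinode-449000-m02"},
	   "status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}]}`

	type nodes struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		var n nodes
		if err := json.Unmarshal([]byte(nodeList), &n); err != nil {
			panic(err)
		}
		for _, node := range n.Items {
			fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
				node.Metadata.Name,
				node.Status.Capacity["ephemeral-storage"],
				node.Status.Capacity["cpu"])
		}
	}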
	I0422 04:38:51.970774    6416 start.go:240] waiting for startup goroutines ...
	I0422 04:38:51.970787    6416 start.go:245] waiting for cluster config update ...
	I0422 04:38:51.970796    6416 start.go:254] writing updated cluster config ...
	I0422 04:38:51.994473    6416 out.go:177] 
	I0422 04:38:52.014790    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:52.014945    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:52.037450    6416 out.go:177] * Starting "multinode-449000-m02" worker node in "multinode-449000" cluster
	I0422 04:38:52.080255    6416 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 04:38:52.080295    6416 cache.go:56] Caching tarball of preloaded images
	I0422 04:38:52.080475    6416 preload.go:173] Found /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0422 04:38:52.080495    6416 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 04:38:52.080626    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:52.081591    6416 start.go:360] acquireMachinesLock for multinode-449000-m02: {Name:mke81a6cfc4bf5ce8e1de7ad51be0d2fed5c5582 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 04:38:52.081700    6416 start.go:364] duration metric: took 82.942µs to acquireMachinesLock for "multinode-449000-m02"
	I0422 04:38:52.081726    6416 start.go:96] Skipping create...Using existing machine configuration
	I0422 04:38:52.081733    6416 fix.go:54] fixHost starting: m02
	I0422 04:38:52.082198    6416 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:52.082217    6416 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:52.091648    6416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52234
	I0422 04:38:52.092000    6416 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:52.092340    6416 main.go:141] libmachine: Using API Version  1
	I0422 04:38:52.092358    6416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:52.092554    6416 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:52.092650    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:38:52.092744    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I0422 04:38:52.092825    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:52.092888    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 6310
	I0422 04:38:52.093843    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid 6310 missing from process table
	I0422 04:38:52.093863    6416 fix.go:112] recreateIfNeeded on multinode-449000-m02: state=Stopped err=<nil>
	I0422 04:38:52.093874    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	W0422 04:38:52.093958    6416 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 04:38:52.117231    6416 out.go:177] * Restarting existing hyperkit VM for "multinode-449000-m02" ...
	I0422 04:38:52.158172    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .Start
	I0422 04:38:52.158386    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:52.158410    6416 main.go:141] libmachine: (multinode-449000-m02) minikube might have been shut down in an unclean way; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid
	I0422 04:38:52.159725    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid 6310 missing from process table
	I0422 04:38:52.159746    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | pid 6310 is in state "Stopped"
	I0422 04:38:52.159764    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid...
	I0422 04:38:52.160132    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Using UUID 6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0
	I0422 04:38:52.186324    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Generated MAC e2:d0:5:63:30:40
	I0422 04:38:52.186345    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000
	I0422 04:38:52.186507    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c3200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0422 04:38:52.186538    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c3200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0422 04:38:52.186610    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/multinode-449000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage,/Users/j
enkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"}
	I0422 04:38:52.186653    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6bb7a425-e2c0-4ba2-b75b-6222ca7aafe0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/multinode-449000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/mult
inode-449000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-449000"
	I0422 04:38:52.186674    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0422 04:38:52.188024    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 DEBUG: hyperkit: Pid is 6455
	I0422 04:38:52.188486    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Attempt 0
	I0422 04:38:52.188514    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:52.188579    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 6455
	I0422 04:38:52.190238    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Searching for e2:d0:5:63:30:40 in /var/db/dhcpd_leases ...
	I0422 04:38:52.190306    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0422 04:38:52.190332    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:3e:5c:84:88:5b:2b ID:1,3e:5c:84:88:5b:2b Lease:0x66279dab}
	I0422 04:38:52.190354    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:33:e:18:56:49 ID:1,92:33:e:18:56:49 Lease:0x66264c0f}
	I0422 04:38:52.190368    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:e2:d0:5:63:30:40 ID:1,e2:d0:5:63:30:40 Lease:0x66279d43}
	I0422 04:38:52.190382    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | Found match: e2:d0:5:63:30:40
	I0422 04:38:52.190396    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | IP: 192.169.0.17
	I0422 04:38:52.190433    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetConfigRaw
	I0422 04:38:52.191085    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I0422 04:38:52.191263    6416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/multinode-449000/config.json ...
	I0422 04:38:52.191782    6416 machine.go:94] provisionDockerMachine start ...
	I0422 04:38:52.191793    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:38:52.191941    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:38:52.192043    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:38:52.192142    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:38:52.192235    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:38:52.192333    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:38:52.192465    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:38:52.192647    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:38:52.192656    6416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 04:38:52.195735    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0422 04:38:52.204103    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0422 04:38:52.205110    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:52.205126    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:52.205136    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:52.205147    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:52.585184    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0422 04:38:52.585203    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0422 04:38:52.699814    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:38:52.699834    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:38:52.699864    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:38:52.699884    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:38:52.700761    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0422 04:38:52.700781    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0422 04:38:57.992005    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0422 04:38:57.992071    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0422 04:38:57.992086    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0422 04:38:58.016646    6416 main.go:141] libmachine: (multinode-449000-m02) DBG | 2024/04/22 04:38:58 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0422 04:39:27.258967    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 04:39:27.258982    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I0422 04:39:27.259114    6416 buildroot.go:166] provisioning hostname "multinode-449000-m02"
	I0422 04:39:27.259125    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I0422 04:39:27.259217    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.259312    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.259405    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.259487    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.259577    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.259704    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.259866    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.259875    6416 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-449000-m02 && echo "multinode-449000-m02" | sudo tee /etc/hostname
	I0422 04:39:27.331893    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-449000-m02
	
	I0422 04:39:27.331913    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.332049    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.332142    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.332243    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.332354    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.332492    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.332640    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.332651    6416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-449000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-449000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-449000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 04:39:27.400238    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 04:39:27.400266    6416 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18711-1033/.minikube CaCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18711-1033/.minikube}
	I0422 04:39:27.400277    6416 buildroot.go:174] setting up certificates
	I0422 04:39:27.400284    6416 provision.go:84] configureAuth start
	I0422 04:39:27.400291    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetMachineName
	I0422 04:39:27.400426    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I0422 04:39:27.400516    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.400607    6416 provision.go:143] copyHostCerts
	I0422 04:39:27.400634    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:39:27.400695    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem, removing ...
	I0422 04:39:27.400701    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:39:27.400845    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem (1082 bytes)
	I0422 04:39:27.401042    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:39:27.401082    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem, removing ...
	I0422 04:39:27.401088    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:39:27.401177    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem (1123 bytes)
	I0422 04:39:27.401337    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:39:27.401378    6416 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem, removing ...
	I0422 04:39:27.401383    6416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:39:27.401458    6416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem (1675 bytes)
	I0422 04:39:27.401605    6416 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem org=jenkins.multinode-449000-m02 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-449000-m02]
	I0422 04:39:27.550203    6416 provision.go:177] copyRemoteCerts
	I0422 04:39:27.550254    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 04:39:27.550268    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.550408    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.550500    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.550577    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.550655    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:39:27.590164    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 04:39:27.590247    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0422 04:39:27.609334    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 04:39:27.609408    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0422 04:39:27.628163    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 04:39:27.628229    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 04:39:27.647070    6416 provision.go:87] duration metric: took 246.777365ms to configureAuth
	I0422 04:39:27.647083    6416 buildroot.go:189] setting minikube options for container-runtime
	I0422 04:39:27.647258    6416 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:39:27.647276    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:27.647405    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.647487    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.647568    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.647634    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.647722    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.647831    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.647951    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.647958    6416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 04:39:27.711230    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0422 04:39:27.711244    6416 buildroot.go:70] root file system type: tmpfs
	I0422 04:39:27.711329    6416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 04:39:27.711348    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.711481    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.711569    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.711657    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.711760    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.711905    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.712045    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.712090    6416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 04:39:27.784685    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 04:39:27.784709    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:27.784846    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:27.784942    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.785023    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:27.785119    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:27.785252    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:27.785395    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:27.785413    6416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 04:39:29.324027    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0422 04:39:29.324042    6416 machine.go:97] duration metric: took 37.132053891s to provisionDockerMachine
	I0422 04:39:29.324050    6416 start.go:293] postStartSetup for "multinode-449000-m02" (driver="hyperkit")
	I0422 04:39:29.324061    6416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 04:39:29.324071    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.324246    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 04:39:29.324268    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:29.324354    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.324449    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.324543    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.324621    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:39:29.362161    6416 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 04:39:29.365050    6416 command_runner.go:130] > NAME=Buildroot
	I0422 04:39:29.365059    6416 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 04:39:29.365063    6416 command_runner.go:130] > ID=buildroot
	I0422 04:39:29.365083    6416 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 04:39:29.365091    6416 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 04:39:29.365170    6416 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 04:39:29.365179    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/addons for local assets ...
	I0422 04:39:29.365281    6416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/files for local assets ...
	I0422 04:39:29.365469    6416 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> 14842.pem in /etc/ssl/certs
	I0422 04:39:29.365475    6416 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> /etc/ssl/certs/14842.pem
	I0422 04:39:29.365676    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 04:39:29.373469    6416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /etc/ssl/certs/14842.pem (1708 bytes)
	I0422 04:39:29.392405    6416 start.go:296] duration metric: took 68.34327ms for postStartSetup
	I0422 04:39:29.392424    6416 fix.go:56] duration metric: took 37.310491855s for fixHost
	I0422 04:39:29.392439    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:29.392575    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.392660    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.392755    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.392849    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.392958    6416 main.go:141] libmachine: Using SSH client type: native
	I0422 04:39:29.393097    6416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69b5b80] 0x69b88e0 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0422 04:39:29.393104    6416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 04:39:29.454814    6416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713785969.627304722
	
	I0422 04:39:29.454826    6416 fix.go:216] guest clock: 1713785969.627304722
	I0422 04:39:29.454831    6416 fix.go:229] Guest: 2024-04-22 04:39:29.627304722 -0700 PDT Remote: 2024-04-22 04:39:29.39243 -0700 PDT m=+79.186243193 (delta=234.874722ms)
	I0422 04:39:29.454843    6416 fix.go:200] guest clock delta is within tolerance: 234.874722ms
	I0422 04:39:29.454848    6416 start.go:83] releasing machines lock for "multinode-449000-m02", held for 37.372937032s
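The guest-clock check above runs date +%s.%N on the VM and compares the result with the host clock. A minimal Go sketch of that comparison, using the exact guest and host values from this log, follows; the one-second tolerance is an assumption for illustration, not the value minikube's fix.go uses.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	// It assumes GNU date's zero-padded 9-digit %N field.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		const tolerance = time.Second // assumed tolerance for this sketch

		guest, err := parseGuestClock("1713785969.627304722") // guest value from the log
		if err != nil {
			panic(err)
		}
		host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
			"2024-04-22 04:39:29.39243 -0700 PDT") // host value from the log
		if err != nil {
			panic(err)
		}

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta) // 234.874722ms here
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
		}
	}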
	I0422 04:39:29.454865    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.454999    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I0422 04:39:29.478473    6416 out.go:177] * Found network options:
	I0422 04:39:29.499392    6416 out.go:177]   - NO_PROXY=192.169.0.16
	W0422 04:39:29.520295    6416 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 04:39:29.520322    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.520866    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.520998    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:39:29.521071    6416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 04:39:29.521104    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	W0422 04:39:29.521164    6416 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 04:39:29.521227    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.521238    6416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 04:39:29.521268    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:39:29.521394    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.521416    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:39:29.521525    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:39:29.521569    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.521689    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:39:29.521707    6416 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:39:29.521838    6416 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:39:29.556257    6416 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 04:39:29.556409    6416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 04:39:29.556470    6416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 04:39:29.604422    6416 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 04:39:29.604897    6416 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0422 04:39:29.604914    6416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
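
The find/mv step above sidelines any pre-existing bridge or podman CNI config (here /etc/cni/net.d/87-podman-bridge.conflist) by renaming it with a .mk_disabled suffix, so it cannot shadow the CNI the cluster installs. A sketch of the same rename pass in Go (illustrative only; minikube shells out to find instead):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI mirrors the `find ... -exec mv {} {}.mk_disabled`
// step: any bridge/podman CNI config in dir is renamed aside. Sketch only,
// not minikube's cni.go implementation.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", disabled)
}
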
	I0422 04:39:29.604921    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:39:29.604992    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:39:29.620264    6416 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0422 04:39:29.620481    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0422 04:39:29.629616    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 04:39:29.638708    6416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 04:39:29.638752    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 04:39:29.647676    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:39:29.656675    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 04:39:29.665598    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:39:29.674573    6416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 04:39:29.683829    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 04:39:29.692872    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 04:39:29.702132    6416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 04:39:29.711303    6416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 04:39:29.719749    6416 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 04:39:29.719901    6416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 04:39:29.728145    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:39:29.834420    6416 ssh_runner.go:195] Run: sudo systemctl restart containerd
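
The sed pipeline above rewrites /etc/containerd/config.toml so containerd runs with the cgroupfs driver: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux runtime is swapped for io.containerd.runc.v2, conf_dir is pinned to /etc/cni/net.d, and containerd is then restarted. The core substitution, sketched in Go (illustration only; minikube shells out to sed):

package main

import (
	"fmt"
	"regexp"
)

// systemdCgroupRe matches the SystemdCgroup line wherever it appears in
// config.toml, preserving its indentation, like the sed expression above.
var systemdCgroupRe = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)

// forceCgroupfs rewrites the setting to false so containerd uses cgroupfs.
func forceCgroupfs(configTOML string) string {
	return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	fmt.Println(forceCgroupfs(in))
}
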
	I0422 04:39:29.852642    6416 start.go:494] detecting cgroup driver to use...
	I0422 04:39:29.852725    6416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 04:39:29.870613    6416 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0422 04:39:29.871052    6416 command_runner.go:130] > [Unit]
	I0422 04:39:29.871060    6416 command_runner.go:130] > Description=Docker Application Container Engine
	I0422 04:39:29.871064    6416 command_runner.go:130] > Documentation=https://docs.docker.com
	I0422 04:39:29.871070    6416 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0422 04:39:29.871074    6416 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0422 04:39:29.871082    6416 command_runner.go:130] > StartLimitBurst=3
	I0422 04:39:29.871086    6416 command_runner.go:130] > StartLimitIntervalSec=60
	I0422 04:39:29.871090    6416 command_runner.go:130] > [Service]
	I0422 04:39:29.871093    6416 command_runner.go:130] > Type=notify
	I0422 04:39:29.871096    6416 command_runner.go:130] > Restart=on-failure
	I0422 04:39:29.871101    6416 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0422 04:39:29.871106    6416 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0422 04:39:29.871116    6416 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0422 04:39:29.871122    6416 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0422 04:39:29.871128    6416 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0422 04:39:29.871133    6416 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0422 04:39:29.871138    6416 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0422 04:39:29.871144    6416 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0422 04:39:29.871157    6416 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0422 04:39:29.871171    6416 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0422 04:39:29.871175    6416 command_runner.go:130] > ExecStart=
	I0422 04:39:29.871203    6416 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0422 04:39:29.871213    6416 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0422 04:39:29.871221    6416 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0422 04:39:29.871226    6416 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0422 04:39:29.871231    6416 command_runner.go:130] > LimitNOFILE=infinity
	I0422 04:39:29.871237    6416 command_runner.go:130] > LimitNPROC=infinity
	I0422 04:39:29.871241    6416 command_runner.go:130] > LimitCORE=infinity
	I0422 04:39:29.871245    6416 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0422 04:39:29.871250    6416 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0422 04:39:29.871254    6416 command_runner.go:130] > TasksMax=infinity
	I0422 04:39:29.871261    6416 command_runner.go:130] > TimeoutStartSec=0
	I0422 04:39:29.871269    6416 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0422 04:39:29.871272    6416 command_runner.go:130] > Delegate=yes
	I0422 04:39:29.871278    6416 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0422 04:39:29.871308    6416 command_runner.go:130] > KillMode=process
	I0422 04:39:29.871312    6416 command_runner.go:130] > [Install]
	I0422 04:39:29.871316    6416 command_runner.go:130] > WantedBy=multi-user.target
	I0422 04:39:29.871416    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:39:29.884933    6416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 04:39:29.904209    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:39:29.915630    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:39:29.926586    6416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0422 04:39:29.946970    6416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:39:29.957645    6416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:39:29.972878    6416 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0422 04:39:29.973104    6416 ssh_runner.go:195] Run: which cri-dockerd
	I0422 04:39:29.975896    6416 command_runner.go:130] > /usr/bin/cri-dockerd
	I0422 04:39:29.976067    6416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 04:39:29.983521    6416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 04:39:29.997905    6416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 04:39:30.098128    6416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 04:39:30.199672    6416 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 04:39:30.199698    6416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
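
The log records only that a 130-byte /etc/docker/daemon.json was copied to the guest to pin Docker to the cgroupfs driver; the file's exact contents are not shown. An assumed, illustrative way to produce such a payload in Go (the keys here are a guess consistent with the "configuring docker to use cgroupfs" message, not the verbatim file):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed daemon.json content for illustration; the real 130-byte file
	// copied by ssh_runner.go is not reproduced in the test output.
	daemon := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(daemon, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
}
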
	I0422 04:39:30.215471    6416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:39:30.324911    6416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 04:40:31.458397    6416 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0422 04:40:31.458419    6416 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0422 04:40:31.458485    6416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.048186237s)
	I0422 04:40:31.458550    6416 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0422 04:40:31.468468    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0422 04:40:31.468481    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500273741Z" level=info msg="Starting up"
	I0422 04:40:31.468494    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500896562Z" level=info msg="containerd not running, starting managed containerd"
	I0422 04:40:31.468509    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.501458070Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	I0422 04:40:31.468520    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.519154130Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0422 04:40:31.468531    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536175934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0422 04:40:31.468542    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536200901Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0422 04:40:31.468552    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536237889Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0422 04:40:31.468561    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536248409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468572    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536401321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468581    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536443904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468600    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536555068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468609    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536590399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468618    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536602655Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0422 04:40:31.468628    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536609559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468638    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536757403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468647    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536982056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468661    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538601388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468670    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538639201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0422 04:40:31.468762    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538724354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0422 04:40:31.468784    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538735079Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0422 04:40:31.468798    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538857030Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0422 04:40:31.468809    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538906380Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0422 04:40:31.468816    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538916250Z" level=info msg="metadata content store policy set" policy=shared
	I0422 04:40:31.468825    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540934544Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0422 04:40:31.468836    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540980765Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0422 04:40:31.468845    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540995031Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0422 04:40:31.468854    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541005291Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0422 04:40:31.468863    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541017645Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0422 04:40:31.468872    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541059879Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0422 04:40:31.468883    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541226925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0422 04:40:31.468892    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541376031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0422 04:40:31.468901    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541411674Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0422 04:40:31.468910    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541423221Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0422 04:40:31.468920    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541432259Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468930    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541440555Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468939    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541448433Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468948    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541457401Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468958    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541466668Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.468968    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541474780Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.469077    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541483321Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.469088    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541490681Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0422 04:40:31.469097    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541503918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469105    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541513941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469114    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541522110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469123    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541530364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469131    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541538164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469140    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541546259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469149    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541553607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469158    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541562316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469167    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541570467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469177    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541582908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469186    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541590762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469194    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541598307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469203    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541606034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469212    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541617175Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0422 04:40:31.469220    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541630384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469235    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541639723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469244    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541646814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0422 04:40:31.469254    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541690816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0422 04:40:31.469265    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541704905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0422 04:40:31.469401    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541735544Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0422 04:40:31.469415    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541746288Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0422 04:40:31.469424    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541956055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0422 04:40:31.469437    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541992919Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0422 04:40:31.469444    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542053080Z" level=info msg="NRI interface is disabled by configuration."
	I0422 04:40:31.469453    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542265818Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0422 04:40:31.469462    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542368204Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0422 04:40:31.469469    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542421668Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0422 04:40:31.469477    6416 command_runner.go:130] > Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542433824Z" level=info msg="containerd successfully booted in 0.024134s"
	I0422 04:40:31.469484    6416 command_runner.go:130] > Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.521245248Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0422 04:40:31.469492    6416 command_runner.go:130] > Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.536466420Z" level=info msg="Loading containers: start."
	I0422 04:40:31.469503    6416 command_runner.go:130] > Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.670082730Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0422 04:40:31.469510    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.470397892Z" level=info msg="Loading containers: done."
	I0422 04:40:31.469520    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.476831522Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	I0422 04:40:31.469528    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.477000847Z" level=info msg="Daemon has completed initialization"
	I0422 04:40:31.469536    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495177168Z" level=info msg="API listen on /var/run/docker.sock"
	I0422 04:40:31.469543    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495332686Z" level=info msg="API listen on [::]:2376"
	I0422 04:40:31.469549    6416 command_runner.go:130] > Apr 22 11:39:29 multinode-449000-m02 systemd[1]: Started Docker Application Container Engine.
	I0422 04:40:31.469554    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0422 04:40:31.469561    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.509057098Z" level=info msg="Processing signal 'terminated'"
	I0422 04:40:31.469571    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510124902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0422 04:40:31.469580    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510320720Z" level=info msg="Daemon shutdown complete"
	I0422 04:40:31.469591    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510348907Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0422 04:40:31.469600    6416 command_runner.go:130] > Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510352277Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0422 04:40:31.469606    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0422 04:40:31.469612    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0422 04:40:31.469647    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0422 04:40:31.469655    6416 command_runner.go:130] > Apr 22 11:39:31 multinode-449000-m02 dockerd[806]: time="2024-04-22T11:39:31.552429015Z" level=info msg="Starting up"
	I0422 04:40:31.469664    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 dockerd[806]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0422 04:40:31.469673    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0422 04:40:31.469680    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0422 04:40:31.469686    6416 command_runner.go:130] > Apr 22 11:40:31 multinode-449000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0422 04:40:31.494051    6416 out.go:177] 
	W0422 04:40:31.514947    6416 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 22 11:39:27 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500273741Z" level=info msg="Starting up"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.500896562Z" level=info msg="containerd not running, starting managed containerd"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:27.501458070Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.519154130Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536175934Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536200901Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536237889Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536248409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536401321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536443904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536555068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536590399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536602655Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536609559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536757403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.536982056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538601388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538639201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538724354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538735079Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538857030Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538906380Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.538916250Z" level=info msg="metadata content store policy set" policy=shared
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540934544Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540980765Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.540995031Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541005291Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541017645Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541059879Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541226925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541376031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541411674Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541423221Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541432259Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541440555Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541448433Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541457401Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541466668Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541474780Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541483321Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541490681Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541503918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541513941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541522110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541530364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541538164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541546259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541553607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541562316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541570467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541582908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541590762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541598307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541606034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541617175Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541630384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541639723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541646814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541690816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541704905Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541735544Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541746288Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541956055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.541992919Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542053080Z" level=info msg="NRI interface is disabled by configuration."
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542265818Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542368204Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542421668Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 22 11:39:27 multinode-449000-m02 dockerd[519]: time="2024-04-22T11:39:27.542433824Z" level=info msg="containerd successfully booted in 0.024134s"
	Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.521245248Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.536466420Z" level=info msg="Loading containers: start."
	Apr 22 11:39:28 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:28.670082730Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.470397892Z" level=info msg="Loading containers: done."
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.476831522Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.477000847Z" level=info msg="Daemon has completed initialization"
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495177168Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 22 11:39:29 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:29.495332686Z" level=info msg="API listen on [::]:2376"
	Apr 22 11:39:29 multinode-449000-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 22 11:39:30 multinode-449000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.509057098Z" level=info msg="Processing signal 'terminated'"
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510124902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510320720Z" level=info msg="Daemon shutdown complete"
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510348907Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 22 11:39:30 multinode-449000-m02 dockerd[513]: time="2024-04-22T11:39:30.510352277Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 22 11:39:31 multinode-449000-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 22 11:39:31 multinode-449000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 11:39:31 multinode-449000-m02 dockerd[806]: time="2024-04-22T11:39:31.552429015Z" level=info msg="Starting up"
	Apr 22 11:40:31 multinode-449000-m02 dockerd[806]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 22 11:40:31 multinode-449000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 22 11:40:31 multinode-449000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0422 04:40:31.515066    6416 out.go:239] * 
	W0422 04:40:31.516170    6416 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 04:40:31.600069    6416 out.go:177] 
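
The decisive failure is dockerd pid 806's line above: it could not dial /run/containerd/containerd.sock before its startup deadline, so systemd recorded docker.service as failed and minikube aborted with RUNTIME_ENABLE. A "context deadline exceeded" like this arises from a retry-until-deadline dial loop; a diagnostic Go sketch of that pattern (not dockerd's code):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForSocket retries dialing a unix socket until the context's deadline
// expires, returning context.DeadlineExceeded if no listener ever appears,
// which is the shape of the dockerd failure above. Diagnostic sketch only.
func waitForSocket(ctx context.Context, path string) error {
	for {
		if conn, err := net.Dial("unix", path); err == nil {
			return conn.Close()
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded, as in the log above
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
		fmt.Println("containerd never came up:", err)
	}
}
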
	
	
	==> Docker <==
	Apr 22 11:38:43 multinode-449000 dockerd[833]: time="2024-04-22T11:38:43.228286882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:38:43 multinode-449000 dockerd[833]: time="2024-04-22T11:38:43.228542460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:38:43 multinode-449000 cri-dockerd[1046]: time="2024-04-22T11:38:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c7bb8def795e3b3c89afb62a0a14ce294ec7dc31ad6374bbd470a8641a3cbec/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 22 11:38:43 multinode-449000 dockerd[833]: time="2024-04-22T11:38:43.460526333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 22 11:38:43 multinode-449000 dockerd[833]: time="2024-04-22T11:38:43.460591470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 22 11:38:43 multinode-449000 dockerd[833]: time="2024-04-22T11:38:43.460603909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:38:43 multinode-449000 dockerd[833]: time="2024-04-22T11:38:43.460687081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.544393548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.544553164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.544567752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.545101494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:38:50 multinode-449000 cri-dockerd[1046]: time="2024-04-22T11:38:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/02ed7c9fdf449778af8239530221454c482d9a6a1419f5eaac5a7bf093601988/resolv.conf as [nameserver 192.169.0.1]"
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.666787668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.666828447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.666840270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:38:50 multinode-449000 dockerd[833]: time="2024-04-22T11:38:50.667082726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:39:05 multinode-449000 dockerd[827]: time="2024-04-22T11:39:05.873978401Z" level=info msg="ignoring event" container=309a55b71ab490cdb1ad6c3950b929c94b64e626d422c88a56b90a282f07f931 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 11:39:05 multinode-449000 dockerd[833]: time="2024-04-22T11:39:05.874200733Z" level=info msg="shim disconnected" id=309a55b71ab490cdb1ad6c3950b929c94b64e626d422c88a56b90a282f07f931 namespace=moby
	Apr 22 11:39:05 multinode-449000 dockerd[833]: time="2024-04-22T11:39:05.874433650Z" level=warning msg="cleaning up after shim disconnected" id=309a55b71ab490cdb1ad6c3950b929c94b64e626d422c88a56b90a282f07f931 namespace=moby
	Apr 22 11:39:05 multinode-449000 dockerd[833]: time="2024-04-22T11:39:05.874444946Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 22 11:39:05 multinode-449000 dockerd[833]: time="2024-04-22T11:39:05.886213771Z" level=warning msg="cleanup warnings time=\"2024-04-22T11:39:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 22 11:39:19 multinode-449000 dockerd[833]: time="2024-04-22T11:39:19.286110889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 22 11:39:19 multinode-449000 dockerd[833]: time="2024-04-22T11:39:19.286415775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 22 11:39:19 multinode-449000 dockerd[833]: time="2024-04-22T11:39:19.286480633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 22 11:39:19 multinode-449000 dockerd[833]: time="2024-04-22T11:39:19.286728736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	db40f4d23c28e       6e38f40d628db       About a minute ago   Running             storage-provisioner       4                   80af9001310f4       storage-provisioner
	c02932a715131       259c8277fcbbc       About a minute ago   Running             kube-scheduler            2                   02ed7c9fdf449       kube-scheduler-multinode-449000
	a0081db047079       8c811b4aec35f       About a minute ago   Running             busybox                   2                   0c7bb8def795e       busybox-fc5497c4f-lr9sv
	6917a8f9d0a25       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   ccafae0b68932       coredns-7db6d8ff4d-tnr9d
	3178a503ec2e5       4950bb10b3f87       About a minute ago   Running             kindnet-cni               2                   a5c81c7a75c2d       kindnet-pbqsb
	65ed3f8af8071       a0bf559e280cf       About a minute ago   Running             kube-proxy                2                   92412cb08dddc       kube-proxy-jrtv2
	309a55b71ab49       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   80af9001310f4       storage-provisioner
	00fe456942e1f       c7aad43836fa5       2 minutes ago        Running             kube-controller-manager   2                   f2dccc35c3521       kube-controller-manager-multinode-449000
	23bd4c54c4e57       3861cfcd7c04c       2 minutes ago        Running             etcd                      2                   109325d1fd9ae       etcd-multinode-449000
	57555e0d61a23       c42f13656d0b2       2 minutes ago        Running             kube-apiserver            2                   d3bff86cad2b5       kube-apiserver-multinode-449000
	450d9a5990703       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   ab4000fa9a58e       busybox-fc5497c4f-lr9sv
	c6d63c83b44a4       cbb01a7bd410d       4 minutes ago        Exited              coredns                   1                   429b0a81fe654       coredns-7db6d8ff4d-tnr9d
	d5b3b5d5a4688       4950bb10b3f87       4 minutes ago        Exited              kindnet-cni               1                   be4f0b4b588ef       kindnet-pbqsb
	8fd92d3d559f9       a0bf559e280cf       4 minutes ago        Exited              kube-proxy                1                   d272ef1c679ea       kube-proxy-jrtv2
	62b5721c79fa4       3861cfcd7c04c       4 minutes ago        Exited              etcd                      1                   d0dcd34254661       etcd-multinode-449000
	1df263b70ea29       c7aad43836fa5       4 minutes ago        Exited              kube-controller-manager   1                   d6f28e2bec076       kube-controller-manager-multinode-449000
	8ac9862246998       c42f13656d0b2       4 minutes ago        Exited              kube-apiserver            1                   46dba4d36ef75       kube-apiserver-multinode-449000
	4cbfdf285d1b5       259c8277fcbbc       4 minutes ago        Exited              kube-scheduler            1                   84c0422896cce       kube-scheduler-multinode-449000
	
	
	==> coredns [6917a8f9d0a2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34657 - 13343 "HINFO IN 2064618581389765346.4727642813022935805. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004674008s
	
	
	==> coredns [c6d63c83b44a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52605 - 63922 "HINFO IN 89135167053439384.4040220400262186119. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.005654228s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-449000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-449000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=multinode-449000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T04_29_13_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:29:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-449000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:40:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:38:38 +0000   Mon, 22 Apr 2024 11:29:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:38:38 +0000   Mon, 22 Apr 2024 11:29:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:38:38 +0000   Mon, 22 Apr 2024 11:29:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:38:38 +0000   Mon, 22 Apr 2024 11:38:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.16
	  Hostname:    multinode-449000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8081a797ea6d4d8c98d30c6228015410
	  System UUID:                586a44d4-0000-0000-8ddd-2786953ca4c9
	  Boot ID:                    c8afed41-9b0e-4388-b865-3fd3cca351b6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lr9sv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 coredns-7db6d8ff4d-tnr9d                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-multinode-449000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-pbqsb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-multinode-449000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-449000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-jrtv2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-449000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)      kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)      kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)      kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                    node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	  Normal  NodeReady                11m                    kubelet          Node multinode-449000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)    kubelet          Node multinode-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)    kubelet          Node multinode-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)    kubelet          Node multinode-449000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                   node-controller  Node multinode-449000 event: Registered Node multinode-449000 in Controller
	
	
	Name:               multinode-449000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-449000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=multinode-449000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T04_36_49_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:36:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-449000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:37:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 11:36:54 +0000   Mon, 22 Apr 2024 11:39:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 11:36:54 +0000   Mon, 22 Apr 2024 11:39:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 11:36:54 +0000   Mon, 22 Apr 2024 11:39:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 11:36:54 +0000   Mon, 22 Apr 2024 11:39:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.17
	  Hostname:    multinode-449000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 413c856a57854d0c9b3de21bdb3a1aa4
	  System UUID:                6bb74ba2-0000-0000-b75b-6222ca7aafe0
	  Boot ID:                    fe9cb23e-03cf-4e45-bd97-bd186f921544
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8bp9v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kindnet-sm2l6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m39s
	  kube-system                 kube-proxy-lx9ft           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m31s                  kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m39s (x2 over 8m39s)  kubelet          Node multinode-449000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s (x2 over 8m39s)  kubelet          Node multinode-449000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s (x2 over 8m39s)  kubelet          Node multinode-449000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m26s                  kubelet          Node multinode-449000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m44s (x2 over 3m44s)  kubelet          Node multinode-449000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x2 over 3m44s)  kubelet          Node multinode-449000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x2 over 3m44s)  kubelet          Node multinode-449000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m40s                  node-controller  Node multinode-449000-m02 event: Registered Node multinode-449000-m02 in Controller
	  Normal  NodeReady                3m39s                  kubelet          Node multinode-449000-m02 status is now: NodeReady
	  Normal  RegisteredNode           106s                   node-controller  Node multinode-449000-m02 event: Registered Node multinode-449000-m02 in Controller
	  Normal  NodeNotReady             66s                    node-controller  Node multinode-449000-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +5.363439] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006903] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.578431] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.263959] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.107607] systemd-fstab-generator[474]: Ignoring "noauto" option for root device
	[  +0.122707] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.821215] systemd-fstab-generator[722]: Ignoring "noauto" option for root device
	[  +0.057751] kauditd_printk_skb: 81 callbacks suppressed
	[  +0.228877] systemd-fstab-generator[792]: Ignoring "noauto" option for root device
	[  +0.115546] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.130443] systemd-fstab-generator[818]: Ignoring "noauto" option for root device
	[  +2.445603] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.111579] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +0.102883] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[  +0.132494] systemd-fstab-generator[1038]: Ignoring "noauto" option for root device
	[  +0.410535] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +1.575392] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.054290] kauditd_printk_skb: 227 callbacks suppressed
	[  +5.517801] kauditd_printk_skb: 52 callbacks suppressed
	[  +2.565023] systemd-fstab-generator[2014]: Ignoring "noauto" option for root device
	[  +4.789488] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.638718] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [23bd4c54c4e5] <==
	{"level":"info","ts":"2024-04-22T11:38:31.917648Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T11:38:31.917791Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T11:38:31.918164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa switched to configuration voters=(1317664063532327594)"}
	{"level":"info","ts":"2024-04-22T11:38:31.918281Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","added-peer-id":"1249487c082462aa","added-peer-peer-urls":["https://192.169.0.16:2380"]}
	{"level":"info","ts":"2024-04-22T11:38:31.918467Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T11:38:31.918594Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T11:38:31.928193Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T11:38:31.928471Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1249487c082462aa","initial-advertise-peer-urls":["https://192.169.0.16:2380"],"listen-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.16:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T11:38:31.92864Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T11:38:31.929011Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-22T11:38:31.929099Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-22T11:38:33.508037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-22T11:38:33.508134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-22T11:38:33.508159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgPreVoteResp from 1249487c082462aa at term 3"}
	{"level":"info","ts":"2024-04-22T11:38:33.508204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became candidate at term 4"}
	{"level":"info","ts":"2024-04-22T11:38:33.508454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgVoteResp from 1249487c082462aa at term 4"}
	{"level":"info","ts":"2024-04-22T11:38:33.508469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became leader at term 4"}
	{"level":"info","ts":"2024-04-22T11:38:33.508476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1249487c082462aa elected leader 1249487c082462aa at term 4"}
	{"level":"info","ts":"2024-04-22T11:38:33.510534Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1249487c082462aa","local-member-attributes":"{Name:multinode-449000 ClientURLs:[https://192.169.0.16:2379]}","request-path":"/0/members/1249487c082462aa/attributes","cluster-id":"1e23f9358b15cc2f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T11:38:33.510849Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:38:33.511013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T11:38:33.511091Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T11:38:33.511104Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:38:33.513019Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T11:38:33.514659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.16:2379"}
	
	
	==> etcd [62b5721c79fa] <==
	{"level":"info","ts":"2024-04-22T11:36:07.930481Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-22T11:36:09.288616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T11:36:09.288761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T11:36:09.288895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgPreVoteResp from 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-04-22T11:36:09.288951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T11:36:09.289054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgVoteResp from 1249487c082462aa at term 3"}
	{"level":"info","ts":"2024-04-22T11:36:09.289106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became leader at term 3"}
	{"level":"info","ts":"2024-04-22T11:36:09.289235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1249487c082462aa elected leader 1249487c082462aa at term 3"}
	{"level":"info","ts":"2024-04-22T11:36:09.290452Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1249487c082462aa","local-member-attributes":"{Name:multinode-449000 ClientURLs:[https://192.169.0.16:2379]}","request-path":"/0/members/1249487c082462aa/attributes","cluster-id":"1e23f9358b15cc2f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T11:36:09.290544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:36:09.290562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:36:09.292423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T11:36:09.29128Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T11:36:09.292673Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T11:36:09.304127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.16:2379"}
	{"level":"info","ts":"2024-04-22T11:38:02.221824Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T11:38:02.221882Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-449000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"]}
	{"level":"warn","ts":"2024-04-22T11:38:02.221947Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:38:02.222036Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:38:02.24058Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:38:02.240626Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T11:38:02.240665Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1249487c082462aa","current-leader-member-id":"1249487c082462aa"}
	{"level":"info","ts":"2024-04-22T11:38:02.243481Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-22T11:38:02.243632Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-04-22T11:38:02.243646Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-449000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"]}
	
	
	==> kernel <==
	 11:40:33 up 2 min,  0 users,  load average: 0.57, 0.29, 0.11
	Linux multinode-449000 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3178a503ec2e] <==
	I0422 11:39:26.906721       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:39:36.918358       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:39:36.918568       1 main.go:227] handling current node
	I0422 11:39:36.918680       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:39:36.918722       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:39:46.923170       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:39:46.923204       1 main.go:227] handling current node
	I0422 11:39:46.923211       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:39:46.923216       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:39:56.926638       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:39:56.926670       1 main.go:227] handling current node
	I0422 11:39:56.926678       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:39:56.926683       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:40:06.930700       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:40:06.930733       1 main.go:227] handling current node
	I0422 11:40:06.930741       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:40:06.930745       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:40:16.942022       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:40:16.942196       1 main.go:227] handling current node
	I0422 11:40:16.942315       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:40:16.942386       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:40:26.947417       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:40:26.947451       1 main.go:227] handling current node
	I0422 11:40:26.947463       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:40:26.947468       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d5b3b5d5a468] <==
	I0422 11:37:12.649062       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0422 11:37:12.649086       1 main.go:250] Node multinode-449000-m03 has CIDR [10.244.3.0/24] 
	I0422 11:37:22.652401       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:37:22.652434       1 main.go:227] handling current node
	I0422 11:37:22.652443       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:37:22.652448       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:37:22.652672       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0422 11:37:22.652700       1 main.go:250] Node multinode-449000-m03 has CIDR [10.244.3.0/24] 
	I0422 11:37:32.657035       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:37:32.657133       1 main.go:227] handling current node
	I0422 11:37:32.657153       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:37:32.657180       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:37:32.657359       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0422 11:37:32.657414       1 main.go:250] Node multinode-449000-m03 has CIDR [10.244.3.0/24] 
	I0422 11:37:42.664255       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:37:42.664290       1 main.go:227] handling current node
	I0422 11:37:42.664298       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:37:42.664302       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	I0422 11:37:42.664605       1 main.go:223] Handling node with IPs: map[192.169.0.18:{}]
	I0422 11:37:42.664633       1 main.go:250] Node multinode-449000-m03 has CIDR [10.244.2.0/24] 
	I0422 11:37:42.664678       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.18 Flags: [] Table: 0} 
	I0422 11:37:52.668597       1 main.go:223] Handling node with IPs: map[192.169.0.16:{}]
	I0422 11:37:52.668710       1 main.go:227] handling current node
	I0422 11:37:52.668733       1 main.go:223] Handling node with IPs: map[192.169.0.17:{}]
	I0422 11:37:52.668747       1 main.go:250] Node multinode-449000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [57555e0d61a2] <==
	I0422 11:38:34.435557       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 11:38:34.435626       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 11:38:34.435635       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 11:38:34.436169       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 11:38:34.437766       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 11:38:34.438375       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 11:38:34.440370       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 11:38:34.440434       1 aggregator.go:165] initial CRD sync complete...
	I0422 11:38:34.440486       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 11:38:34.440546       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 11:38:34.440553       1 cache.go:39] Caches are synced for autoregister controller
	I0422 11:38:34.441214       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0422 11:38:34.455172       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 11:38:34.455207       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 11:38:34.456037       1 policy_source.go:224] refreshing policies
	I0422 11:38:34.508136       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 11:38:35.351868       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0422 11:38:35.547830       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.16]
	I0422 11:38:35.548588       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 11:38:35.552683       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 11:38:36.412303       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 11:38:36.530498       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 11:38:36.540455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 11:38:36.584001       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 11:38:36.588502       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [8ac986224699] <==
	W0422 11:38:03.238904       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.238977       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.239118       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.239193       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.239357       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.239516       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.239785       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.239870       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.239986       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240119       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240243       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240284       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240388       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240579       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240625       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240645       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240745       1 logging.go:59] [core] [Channel #2 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.240793       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.241038       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.241152       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.241044       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.241360       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.241506       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.241563       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:38:03.241579       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [00fe456942e1] <==
	I0422 11:38:47.049375       1 shared_informer.go:320] Caches are synced for PV protection
	I0422 11:38:47.050742       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0422 11:38:47.052279       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0422 11:38:47.054075       1 shared_informer.go:320] Caches are synced for ephemeral
	I0422 11:38:47.056662       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0422 11:38:47.056785       1 shared_informer.go:320] Caches are synced for persistent volume
	I0422 11:38:47.058324       1 shared_informer.go:320] Caches are synced for TTL
	I0422 11:38:47.061109       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0422 11:38:47.067671       1 shared_informer.go:320] Caches are synced for PVC protection
	I0422 11:38:47.111203       1 shared_informer.go:320] Caches are synced for disruption
	I0422 11:38:47.145773       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0422 11:38:47.212674       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0422 11:38:47.221074       1 shared_informer.go:320] Caches are synced for stateful set
	I0422 11:38:47.240065       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 11:38:47.254001       1 shared_informer.go:320] Caches are synced for daemon sets
	I0422 11:38:47.263207       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 11:38:47.679561       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 11:38:47.686248       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 11:38:47.686318       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 11:39:27.075978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.65385ms"
	I0422 11:39:27.076163       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.199µs"
	I0422 11:39:47.023461       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jkzvq"
	I0422 11:39:47.032804       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jkzvq"
	I0422 11:39:47.032839       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4q52c"
	I0422 11:39:47.042261       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4q52c"
	
	
	==> kube-controller-manager [1df263b70ea2] <==
	I0422 11:36:49.211986       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-449000-m02\" does not exist"
	I0422 11:36:49.212027       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-449000-m03"
	I0422 11:36:49.220558       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-449000-m02" podCIDRs=["10.244.1.0/24"]
	I0422 11:36:50.141814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.832µs"
	I0422 11:36:54.280645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-449000-m02"
	I0422 11:37:03.158676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.488µs"
	I0422 11:37:03.235775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.754µs"
	I0422 11:37:03.237867       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.707µs"
	I0422 11:37:03.386812       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-449000-m02"
	I0422 11:37:38.498519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.626113ms"
	I0422 11:37:38.498791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="229.88µs"
	I0422 11:37:38.508777       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.757943ms"
	I0422 11:37:38.508929       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.7µs"
	I0422 11:37:39.840614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.772497ms"
	I0422 11:37:39.842552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.382µs"
	I0422 11:37:41.725308       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-449000-m02"
	I0422 11:37:42.492137       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-449000-m03\" does not exist"
	I0422 11:37:42.493035       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-449000-m02"
	I0422 11:37:42.498816       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-449000-m03" podCIDRs=["10.244.2.0/24"]
	I0422 11:37:43.409280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.77µs"
	I0422 11:37:43.422805       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.794µs"
	I0422 11:37:43.425658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.719µs"
	I0422 11:37:43.427249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.116µs"
	I0422 11:37:47.761866       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-449000-m02"
	I0422 11:37:50.815805       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-449000-m02"
	
	
	==> kube-proxy [65ed3f8af807] <==
	I0422 11:38:35.967005       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:38:35.982511       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.16"]
	I0422 11:38:36.048116       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:38:36.048140       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:38:36.048154       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:38:36.051851       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:38:36.052440       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:38:36.052452       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:38:36.053916       1 config.go:192] "Starting service config controller"
	I0422 11:38:36.054154       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:38:36.054240       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:38:36.054248       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:38:36.054954       1 config.go:319] "Starting node config controller"
	I0422 11:38:36.055972       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:38:36.154368       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 11:38:36.154415       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:38:36.156200       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8fd92d3d559f] <==
	I0422 11:36:11.532154       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:36:11.548563       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.16"]
	I0422 11:36:11.607308       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:36:11.607346       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:36:11.607359       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:36:11.609818       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:36:11.610492       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:36:11.610524       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:36:11.611811       1 config.go:192] "Starting service config controller"
	I0422 11:36:11.611963       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:36:11.612067       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:36:11.612092       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:36:11.613813       1 config.go:319] "Starting node config controller"
	I0422 11:36:11.613838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:36:11.712163       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 11:36:11.712177       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:36:11.714029       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4cbfdf285d1b] <==
	I0422 11:36:08.629179       1 serving.go:380] Generated self-signed cert in-memory
	I0422 11:36:10.314434       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 11:36:10.314467       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:36:10.317689       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0422 11:36:10.317724       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0422 11:36:10.317745       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 11:36:10.318479       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 11:36:10.318073       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0422 11:36:10.318779       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0422 11:36:10.319541       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 11:36:10.319697       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 11:36:10.418433       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0422 11:36:10.418775       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 11:36:10.419854       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0422 11:38:02.213387       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0422 11:38:02.213441       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0422 11:38:02.213575       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c02932a71513] <==
	I0422 11:38:51.135220       1 serving.go:380] Generated self-signed cert in-memory
	I0422 11:38:51.733851       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 11:38:51.733896       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:38:51.737919       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 11:38:51.738176       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0422 11:38:51.738230       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0422 11:38:51.738322       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 11:38:51.739235       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 11:38:51.739264       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 11:38:51.739275       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0422 11:38:51.739295       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0422 11:38:51.839543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0422 11:38:51.839674       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 11:38:51.840509       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Apr 22 11:38:37 multinode-449000 kubelet[1286]: E0422 11:38:37.236351    1286 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-tnr9d" podUID="20633bf5-f995-44a1-b778-441b906496cd"
	Apr 22 11:38:38 multinode-449000 kubelet[1286]: I0422 11:38:38.186805    1286 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Apr 22 11:38:38 multinode-449000 kubelet[1286]: E0422 11:38:38.731608    1286 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 22 11:38:38 multinode-449000 kubelet[1286]: E0422 11:38:38.731725    1286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/20633bf5-f995-44a1-b778-441b906496cd-config-volume podName:20633bf5-f995-44a1-b778-441b906496cd nodeName:}" failed. No retries permitted until 2024-04-22 11:38:42.731707494 +0000 UTC m=+12.624693992 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20633bf5-f995-44a1-b778-441b906496cd-config-volume") pod "coredns-7db6d8ff4d-tnr9d" (UID: "20633bf5-f995-44a1-b778-441b906496cd") : object "kube-system"/"coredns" not registered
	Apr 22 11:38:38 multinode-449000 kubelet[1286]: E0422 11:38:38.832410    1286 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Apr 22 11:38:38 multinode-449000 kubelet[1286]: E0422 11:38:38.832481    1286 projected.go:200] Error preparing data for projected volume kube-api-access-sjlmv for pod default/busybox-fc5497c4f-lr9sv: object "default"/"kube-root-ca.crt" not registered
	Apr 22 11:38:38 multinode-449000 kubelet[1286]: E0422 11:38:38.832548    1286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72167db2-5006-4cb7-b32b-20f7cc00e57c-kube-api-access-sjlmv podName:72167db2-5006-4cb7-b32b-20f7cc00e57c nodeName:}" failed. No retries permitted until 2024-04-22 11:38:42.832532652 +0000 UTC m=+12.725519153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-sjlmv" (UniqueName: "kubernetes.io/projected/72167db2-5006-4cb7-b32b-20f7cc00e57c-kube-api-access-sjlmv") pod "busybox-fc5497c4f-lr9sv" (UID: "72167db2-5006-4cb7-b32b-20f7cc00e57c") : object "default"/"kube-root-ca.crt" not registered
	Apr 22 11:38:43 multinode-449000 kubelet[1286]: I0422 11:38:43.058117    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccafae0b6893214d4b2e2a24410d43a1478f035f705560ac5c055e87cb5120ad"
	Apr 22 11:38:50 multinode-449000 kubelet[1286]: I0422 11:38:50.184119    1286 topology_manager.go:215] "Topology Admit Handler" podUID="bfbe1363530b1149f8b4b1a13313452a" podNamespace="kube-system" podName="kube-scheduler-multinode-449000"
	Apr 22 11:38:50 multinode-449000 kubelet[1286]: I0422 11:38:50.226686    1286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bfbe1363530b1149f8b4b1a13313452a-kubeconfig\") pod \"kube-scheduler-multinode-449000\" (UID: \"bfbe1363530b1149f8b4b1a13313452a\") " pod="kube-system/kube-scheduler-multinode-449000"
	Apr 22 11:38:51 multinode-449000 kubelet[1286]: I0422 11:38:51.166664    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-multinode-449000" podStartSLOduration=1.16665311 podStartE2EDuration="1.16665311s" podCreationTimestamp="2024-04-22 11:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 11:38:51.165856879 +0000 UTC m=+21.058843368" watchObservedRunningTime="2024-04-22 11:38:51.16665311 +0000 UTC m=+21.059639592"
	Apr 22 11:39:06 multinode-449000 kubelet[1286]: I0422 11:39:06.272270    1286 scope.go:117] "RemoveContainer" containerID="7fd342a68d8435a32e16e7b9a6311a99cc8c741a7f2cd58495d70a0587e07f2d"
	Apr 22 11:39:06 multinode-449000 kubelet[1286]: I0422 11:39:06.272485    1286 scope.go:117] "RemoveContainer" containerID="309a55b71ab490cdb1ad6c3950b929c94b64e626d422c88a56b90a282f07f931"
	Apr 22 11:39:06 multinode-449000 kubelet[1286]: E0422 11:39:06.272588    1286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f286f444-3ade-4e54-85bb-8577f0234cca)\"" pod="kube-system/storage-provisioner" podUID="f286f444-3ade-4e54-85bb-8577f0234cca"
	Apr 22 11:39:19 multinode-449000 kubelet[1286]: I0422 11:39:19.235261    1286 scope.go:117] "RemoveContainer" containerID="309a55b71ab490cdb1ad6c3950b929c94b64e626d422c88a56b90a282f07f931"
	Apr 22 11:39:30 multinode-449000 kubelet[1286]: E0422 11:39:30.260510    1286 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:39:30 multinode-449000 kubelet[1286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:39:30 multinode-449000 kubelet[1286]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:39:30 multinode-449000 kubelet[1286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:39:30 multinode-449000 kubelet[1286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:40:30 multinode-449000 kubelet[1286]: E0422 11:40:30.262197    1286 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:40:30 multinode-449000 kubelet[1286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:40:30 multinode-449000 kubelet[1286]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:40:30 multinode-449000 kubelet[1286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:40:30 multinode-449000 kubelet[1286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-449000 -n multinode-449000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-449000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (144.82s)
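The repeated "Could not set up iptables canary" entries in the kubelet log above trace to the guest kernel lacking an ip6tables `nat' table, as the error text itself hints ("do you need to insmod?"). A minimal diagnostic sketch for confirming this from the host; the `minikube ssh` invocations against the multinode-449000 profile are an assumed manual debugging step, not something the test harness runs:

	# Reproduce the kubelet's failing probe inside the guest:
	minikube ssh -p multinode-449000 -- sudo ip6tables -t nat -L -n
	# If the table is missing, loading the module (when the kernel ships it) would restore it:
	minikube ssh -p multinode-449000 -- sudo modprobe ip6table_nat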

                                                
                                    
TestPreload (229.48s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-194000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0422 04:43:27.452109    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 04:43:44.406467    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-194000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m48.732939926s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-194000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-194000 image pull gcr.io/k8s-minikube/busybox: (1.209133923s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-194000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-194000: (8.388490658s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-194000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0422 04:44:53.355536    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
preload_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-194000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : exit status 90 (1m45.71138142s)
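In the stderr trace below, minikube notices a stale hyperkit pid file left over from an unclean shutdown and removes it before restarting the VM. The same check can be done by hand when a run dies in this state; a minimal sketch, with the pid-file path taken from the log and the ps/rm cleanup being an assumed manual step rather than anything the harness executes:

	PIDFILE=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/hyperkit.pid
	# A pid file whose recorded pid is no longer in the process table is stale and safe to delete.
	if [ -f "$PIDFILE" ] && ! ps -p "$(cat "$PIDFILE")" >/dev/null 2>&1; then
	  rm "$PIDFILE"
	fi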

                                                
                                                
-- stdout --
	* [test-preload-194000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the hyperkit driver based on existing profile
	* Starting "test-preload-194000" primary control-plane node in "test-preload-194000" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing hyperkit VM for "test-preload-194000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 04:44:44.923845    6753 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:44:44.924024    6753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:44:44.924030    6753 out.go:304] Setting ErrFile to fd 2...
	I0422 04:44:44.924033    6753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:44:44.924220    6753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:44:44.925580    6753 out.go:298] Setting JSON to false
	I0422 04:44:44.947440    6753 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4454,"bootTime":1713781830,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 04:44:44.947522    6753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 04:44:44.968929    6753 out.go:177] * [test-preload-194000] minikube v1.33.0 on Darwin 14.4.1
	I0422 04:44:45.032425    6753 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 04:44:45.011702    6753 notify.go:220] Checking for updates...
	I0422 04:44:45.053712    6753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 04:44:45.095328    6753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 04:44:45.116790    6753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 04:44:45.137780    6753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	I0422 04:44:45.158577    6753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 04:44:45.180433    6753 config.go:182] Loaded profile config "test-preload-194000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0422 04:44:45.181203    6753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:44:45.181292    6753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:44:45.190630    6753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52432
	I0422 04:44:45.190970    6753 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:44:45.191412    6753 main.go:141] libmachine: Using API Version  1
	I0422 04:44:45.191426    6753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:44:45.191630    6753 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:44:45.191764    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:44:45.212486    6753 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 04:44:45.233708    6753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 04:44:45.234281    6753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:44:45.234336    6753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:44:45.244442    6753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52434
	I0422 04:44:45.244795    6753 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:44:45.245117    6753 main.go:141] libmachine: Using API Version  1
	I0422 04:44:45.245128    6753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:44:45.245371    6753 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:44:45.245493    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:44:45.273650    6753 out.go:177] * Using the hyperkit driver based on existing profile
	I0422 04:44:45.315504    6753 start.go:297] selected driver: hyperkit
	I0422 04:44:45.315525    6753 start.go:901] validating driver "hyperkit" against &{Name:test-preload-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.20 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:44:45.315678    6753 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 04:44:45.319093    6753 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 04:44:45.319195    6753 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18711-1033/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0422 04:44:45.327381    6753 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0422 04:44:45.331301    6753 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:44:45.331322    6753 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0422 04:44:45.331413    6753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 04:44:45.331477    6753 cni.go:84] Creating CNI manager for ""
	I0422 04:44:45.331493    6753 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 04:44:45.331555    6753 start.go:340] cluster config:
	{Name:test-preload-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.20 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 04:44:45.331666    6753 iso.go:125] acquiring lock: {Name:mk174d786084574fba345b763762a2b8adb514c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 04:44:45.373717    6753 out.go:177] * Starting "test-preload-194000" primary control-plane node in "test-preload-194000" cluster
	I0422 04:44:45.394466    6753 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0422 04:44:45.443925    6753 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0422 04:44:45.443962    6753 cache.go:56] Caching tarball of preloaded images
	I0422 04:44:45.444471    6753 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0422 04:44:45.465827    6753 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0422 04:44:45.507641    6753 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0422 04:44:45.584542    6753 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4?checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0422 04:44:50.607069    6753 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0422 04:44:50.607270    6753 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0422 04:44:51.193031    6753 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on docker
	I0422 04:44:51.193113    6753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/test-preload-194000/config.json ...
	I0422 04:44:51.193531    6753 start.go:360] acquireMachinesLock for test-preload-194000: {Name:mke81a6cfc4bf5ce8e1de7ad51be0d2fed5c5582 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 04:44:51.193606    6753 start.go:364] duration metric: took 61.139µs to acquireMachinesLock for "test-preload-194000"
	I0422 04:44:51.193624    6753 start.go:96] Skipping create...Using existing machine configuration
	I0422 04:44:51.193635    6753 fix.go:54] fixHost starting: 
	I0422 04:44:51.193902    6753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:44:51.193921    6753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:44:51.203413    6753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52440
	I0422 04:44:51.203781    6753 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:44:51.204114    6753 main.go:141] libmachine: Using API Version  1
	I0422 04:44:51.204124    6753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:44:51.204338    6753 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:44:51.204441    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:44:51.204541    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetState
	I0422 04:44:51.204625    6753 main.go:141] libmachine: (test-preload-194000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:44:51.204718    6753 main.go:141] libmachine: (test-preload-194000) DBG | hyperkit pid from json: 6652
	I0422 04:44:51.205638    6753 main.go:141] libmachine: (test-preload-194000) DBG | hyperkit pid 6652 missing from process table
	I0422 04:44:51.205672    6753 fix.go:112] recreateIfNeeded on test-preload-194000: state=Stopped err=<nil>
	I0422 04:44:51.205691    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	W0422 04:44:51.205781    6753 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 04:44:51.249438    6753 out.go:177] * Restarting existing hyperkit VM for "test-preload-194000" ...
	I0422 04:44:51.270500    6753 main.go:141] libmachine: (test-preload-194000) Calling .Start
	I0422 04:44:51.270747    6753 main.go:141] libmachine: (test-preload-194000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:44:51.270821    6753 main.go:141] libmachine: (test-preload-194000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/hyperkit.pid
	I0422 04:44:51.272632    6753 main.go:141] libmachine: (test-preload-194000) DBG | hyperkit pid 6652 missing from process table
	I0422 04:44:51.272652    6753 main.go:141] libmachine: (test-preload-194000) DBG | pid 6652 is in state "Stopped"
	I0422 04:44:51.272672    6753 main.go:141] libmachine: (test-preload-194000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/hyperkit.pid...
	I0422 04:44:51.273211    6753 main.go:141] libmachine: (test-preload-194000) DBG | Using UUID d0413403-6def-475f-b2f4-952818e9c909
	I0422 04:44:51.384029    6753 main.go:141] libmachine: (test-preload-194000) DBG | Generated MAC 12:fb:ad:2e:1d:5a
	I0422 04:44:51.384053    6753 main.go:141] libmachine: (test-preload-194000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-194000
	I0422 04:44:51.384175    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d0413403-6def-475f-b2f4-952818e9c909", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0fc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:44:51.384205    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d0413403-6def-475f-b2f4-952818e9c909", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c0fc0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 04:44:51.384289    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d0413403-6def-475f-b2f4-952818e9c909", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/test-preload-194000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-194000"}
	I0422 04:44:51.384334    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d0413403-6def-475f-b2f4-952818e9c909 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/test-preload-194000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/console-ring -f kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=test-preload-194000"
	I0422 04:44:51.384348    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0422 04:44:51.385741    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 DEBUG: hyperkit: Pid is 6772
	I0422 04:44:51.386519    6753 main.go:141] libmachine: (test-preload-194000) DBG | Attempt 0
	I0422 04:44:51.386535    6753 main.go:141] libmachine: (test-preload-194000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:44:51.386636    6753 main.go:141] libmachine: (test-preload-194000) DBG | hyperkit pid from json: 6772
	I0422 04:44:51.388402    6753 main.go:141] libmachine: (test-preload-194000) DBG | Searching for 12:fb:ad:2e:1d:5a in /var/db/dhcpd_leases ...
	I0422 04:44:51.388477    6753 main.go:141] libmachine: (test-preload-194000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0422 04:44:51.388495    6753 main.go:141] libmachine: (test-preload-194000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:12:fb:ad:2e:1d:5a ID:1,12:fb:ad:2e:1d:5a Lease:0x66279ec0}
	I0422 04:44:51.388509    6753 main.go:141] libmachine: (test-preload-194000) DBG | Found match: 12:fb:ad:2e:1d:5a
	I0422 04:44:51.388522    6753 main.go:141] libmachine: (test-preload-194000) DBG | IP: 192.169.0.20
	I0422 04:44:51.388610    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetConfigRaw
	I0422 04:44:51.389388    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetIP
	I0422 04:44:51.389581    6753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/test-preload-194000/config.json ...
	I0422 04:44:51.390045    6753 machine.go:94] provisionDockerMachine start ...
	I0422 04:44:51.390064    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:44:51.390182    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:44:51.390278    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:44:51.390385    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:44:51.390507    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:44:51.390621    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:44:51.390752    6753 main.go:141] libmachine: Using SSH client type: native
	I0422 04:44:51.390962    6753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x861fb80] 0x86228e0 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0422 04:44:51.390971    6753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 04:44:51.394018    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0422 04:44:51.447982    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0422 04:44:51.448732    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:44:51.448751    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:44:51.448759    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:44:51.448768    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:44:51.828564    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0422 04:44:51.828583    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0422 04:44:51.943342    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 04:44:51.943371    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 04:44:51.943380    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 04:44:51.943386    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 04:44:51.944237    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0422 04:44:51.944247    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0422 04:44:57.189204    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0422 04:44:57.189235    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0422 04:44:57.189242    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0422 04:44:57.212875    6753 main.go:141] libmachine: (test-preload-194000) DBG | 2024/04/22 04:44:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0422 04:45:26.455499    6753 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 04:45:26.455513    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetMachineName
	I0422 04:45:26.455645    6753 buildroot.go:166] provisioning hostname "test-preload-194000"
	I0422 04:45:26.455657    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetMachineName
	I0422 04:45:26.455762    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:26.455886    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:26.455974    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.456059    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.456139    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:26.456259    6753 main.go:141] libmachine: Using SSH client type: native
	I0422 04:45:26.456405    6753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x861fb80] 0x86228e0 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0422 04:45:26.456417    6753 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-194000 && echo "test-preload-194000" | sudo tee /etc/hostname
	I0422 04:45:26.521962    6753 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-194000
	
	I0422 04:45:26.521985    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:26.522117    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:26.522223    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.522315    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.522413    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:26.522529    6753 main.go:141] libmachine: Using SSH client type: native
	I0422 04:45:26.522688    6753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x861fb80] 0x86228e0 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0422 04:45:26.522699    6753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-194000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-194000/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-194000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 04:45:26.584512    6753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 04:45:26.584534    6753 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18711-1033/.minikube CaCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18711-1033/.minikube}
	I0422 04:45:26.584548    6753 buildroot.go:174] setting up certificates
	I0422 04:45:26.584553    6753 provision.go:84] configureAuth start
	I0422 04:45:26.584560    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetMachineName
	I0422 04:45:26.584698    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetIP
	I0422 04:45:26.584795    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:26.584881    6753 provision.go:143] copyHostCerts
	I0422 04:45:26.584979    6753 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem, removing ...
	I0422 04:45:26.584989    6753 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 04:45:26.585118    6753 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem (1082 bytes)
	I0422 04:45:26.585401    6753 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem, removing ...
	I0422 04:45:26.585408    6753 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 04:45:26.585488    6753 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem (1123 bytes)
	I0422 04:45:26.585727    6753 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem, removing ...
	I0422 04:45:26.585734    6753 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 04:45:26.585823    6753 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem (1675 bytes)
	I0422 04:45:26.585995    6753 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem org=jenkins.test-preload-194000 san=[127.0.0.1 192.169.0.20 localhost minikube test-preload-194000]
	I0422 04:45:26.640648    6753 provision.go:177] copyRemoteCerts
	I0422 04:45:26.640707    6753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 04:45:26.640725    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:26.640853    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:26.640940    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.641022    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:26.641099    6753 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/id_rsa Username:docker}
	I0422 04:45:26.677031    6753 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0422 04:45:26.695930    6753 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0422 04:45:26.714736    6753 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 04:45:26.733675    6753 provision.go:87] duration metric: took 149.102698ms to configureAuth
	I0422 04:45:26.733687    6753 buildroot.go:189] setting minikube options for container-runtime
	I0422 04:45:26.733833    6753 config.go:182] Loaded profile config "test-preload-194000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0422 04:45:26.733847    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:45:26.733983    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:26.734080    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:26.734177    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.734249    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.734328    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:26.734442    6753 main.go:141] libmachine: Using SSH client type: native
	I0422 04:45:26.734570    6753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x861fb80] 0x86228e0 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0422 04:45:26.734577    6753 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 04:45:26.788212    6753 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0422 04:45:26.788223    6753 buildroot.go:70] root file system type: tmpfs
	I0422 04:45:26.788299    6753 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 04:45:26.788318    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:26.788442    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:26.788538    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.788623    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.788718    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:26.788849    6753 main.go:141] libmachine: Using SSH client type: native
	I0422 04:45:26.788991    6753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x861fb80] 0x86228e0 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0422 04:45:26.789035    6753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 04:45:26.855322    6753 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 04:45:26.855353    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:26.855505    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:26.855610    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.855700    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:26.855794    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:26.855918    6753 main.go:141] libmachine: Using SSH client type: native
	I0422 04:45:26.856066    6753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x861fb80] 0x86228e0 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0422 04:45:26.856083    6753 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 04:45:28.417844    6753 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0422 04:45:28.417859    6753 machine.go:97] duration metric: took 37.026694797s to provisionDockerMachine
	I0422 04:45:28.417871    6753 start.go:293] postStartSetup for "test-preload-194000" (driver="hyperkit")
	I0422 04:45:28.417878    6753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 04:45:28.417888    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:45:28.418084    6753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 04:45:28.418096    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:28.418189    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:28.418275    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:28.418378    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:28.418472    6753 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/id_rsa Username:docker}
	I0422 04:45:28.452599    6753 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 04:45:28.455740    6753 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 04:45:28.455752    6753 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/addons for local assets ...
	I0422 04:45:28.455858    6753 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/files for local assets ...
	I0422 04:45:28.456045    6753 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> 14842.pem in /etc/ssl/certs
	I0422 04:45:28.456263    6753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 04:45:28.463441    6753 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /etc/ssl/certs/14842.pem (1708 bytes)
	I0422 04:45:28.483558    6753 start.go:296] duration metric: took 65.676912ms for postStartSetup
	I0422 04:45:28.483581    6753 fix.go:56] duration metric: took 37.288831443s for fixHost
	I0422 04:45:28.483594    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:28.483720    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:28.483818    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:28.483909    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:28.483992    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:28.484129    6753 main.go:141] libmachine: Using SSH client type: native
	I0422 04:45:28.484268    6753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x861fb80] 0x86228e0 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0422 04:45:28.484276    6753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 04:45:28.539594    6753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713786328.611260680
	
	I0422 04:45:28.539605    6753 fix.go:216] guest clock: 1713786328.611260680
	I0422 04:45:28.539610    6753 fix.go:229] Guest: 2024-04-22 04:45:28.61126068 -0700 PDT Remote: 2024-04-22 04:45:28.483584 -0700 PDT m=+43.600399153 (delta=127.67668ms)
	I0422 04:45:28.539644    6753 fix.go:200] guest clock delta is within tolerance: 127.67668ms
	I0422 04:45:28.539649    6753 start.go:83] releasing machines lock for "test-preload-194000", held for 37.344915973s
	I0422 04:45:28.539669    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:45:28.539798    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetIP
	I0422 04:45:28.539901    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:45:28.540198    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:45:28.540296    6753 main.go:141] libmachine: (test-preload-194000) Calling .DriverName
	I0422 04:45:28.540373    6753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 04:45:28.540410    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:28.540437    6753 ssh_runner.go:195] Run: cat /version.json
	I0422 04:45:28.540453    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHHostname
	I0422 04:45:28.540519    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:28.540543    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHPort
	I0422 04:45:28.540623    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:28.540645    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHKeyPath
	I0422 04:45:28.540716    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:28.540733    6753 main.go:141] libmachine: (test-preload-194000) Calling .GetSSHUsername
	I0422 04:45:28.540792    6753 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/id_rsa Username:docker}
	I0422 04:45:28.540809    6753 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/test-preload-194000/id_rsa Username:docker}
	I0422 04:45:28.615342    6753 ssh_runner.go:195] Run: systemctl --version
	I0422 04:45:28.619977    6753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 04:45:28.624069    6753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 04:45:28.624117    6753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 04:45:28.637842    6753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 04:45:28.637853    6753 start.go:494] detecting cgroup driver to use...
	I0422 04:45:28.637959    6753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:45:28.652835    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0422 04:45:28.661706    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 04:45:28.670653    6753 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 04:45:28.670701    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 04:45:28.679634    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:45:28.688574    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 04:45:28.697318    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 04:45:28.706241    6753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 04:45:28.715351    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 04:45:28.724278    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 04:45:28.733156    6753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 04:45:28.742080    6753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 04:45:28.750167    6753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 04:45:28.758179    6753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:45:28.858202    6753 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0422 04:45:28.876684    6753 start.go:494] detecting cgroup driver to use...
	I0422 04:45:28.876779    6753 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 04:45:28.896715    6753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:45:28.909779    6753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 04:45:28.927123    6753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 04:45:28.939062    6753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:45:28.949990    6753 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0422 04:45:28.974560    6753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 04:45:28.986120    6753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 04:45:29.000623    6753 ssh_runner.go:195] Run: which cri-dockerd
	I0422 04:45:29.003565    6753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 04:45:29.011535    6753 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 04:45:29.024940    6753 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 04:45:29.134332    6753 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 04:45:29.235957    6753 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 04:45:29.236026    6753 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0422 04:45:29.258914    6753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 04:45:29.353751    6753 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 04:46:30.403900    6753 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.048292553s)
	I0422 04:46:30.403963    6753 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0422 04:46:30.438659    6753 out.go:177] 
	W0422 04:46:30.460421    6753 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 22 11:45:26 test-preload-194000 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 11:45:26 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:26.307944767Z" level=info msg="Starting up"
	Apr 22 11:45:26 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:26.308551831Z" level=info msg="containerd not running, starting managed containerd"
	Apr 22 11:45:26 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:26.311241974Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.328777482Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343166274Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343242395Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343308344Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343344473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343500257Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343549445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343677121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343721159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343752585Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343780561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.343933748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.344166066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.345753104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.345802305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.345937180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.345979078Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.346120483Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.346171649Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.346202812Z" level=info msg="metadata content store policy set" policy=shared
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349167553Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349228769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349266835Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349306374Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349346306Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349462435Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349678911Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349770882Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349820745Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349858266Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349889536Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349923343Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349953967Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.349984656Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350019104Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350050288Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350084645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350117434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350155010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350186923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350216811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350246157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350275862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350305486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350334477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350365809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350395580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350429103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350457851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350486576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350530952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350576749Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350617053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350648396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350677329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350793892Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350841639Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350874522Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350903189Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.350998857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.351043213Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.351072847Z" level=info msg="NRI interface is disabled by configuration."
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.351250945Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.351337022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.351445219Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 22 11:45:26 test-preload-194000 dockerd[518]: time="2024-04-22T11:45:26.351511883Z" level=info msg="containerd successfully booted in 0.023765s"
	Apr 22 11:45:27 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:27.330335492Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 22 11:45:27 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:27.351376353Z" level=info msg="Loading containers: start."
	Apr 22 11:45:27 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:27.516304797Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 22 11:45:28 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:28.463406191Z" level=info msg="Loading containers: done."
	Apr 22 11:45:28 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:28.470399795Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 22 11:45:28 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:28.470559706Z" level=info msg="Daemon has completed initialization"
	Apr 22 11:45:28 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:28.488203219Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 22 11:45:28 test-preload-194000 systemd[1]: Started Docker Application Container Engine.
	Apr 22 11:45:28 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:28.488331101Z" level=info msg="API listen on [::]:2376"
	Apr 22 11:45:29 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:29.437504624Z" level=info msg="Processing signal 'terminated'"
	Apr 22 11:45:29 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:29.438383116Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 22 11:45:29 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:29.438603998Z" level=info msg="Daemon shutdown complete"
	Apr 22 11:45:29 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:29.438663063Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 22 11:45:29 test-preload-194000 dockerd[512]: time="2024-04-22T11:45:29.438676239Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 22 11:45:29 test-preload-194000 systemd[1]: Stopping Docker Application Container Engine...
	Apr 22 11:45:30 test-preload-194000 systemd[1]: docker.service: Deactivated successfully.
	Apr 22 11:45:30 test-preload-194000 systemd[1]: Stopped Docker Application Container Engine.
	Apr 22 11:45:30 test-preload-194000 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 11:45:30 test-preload-194000 dockerd[808]: time="2024-04-22T11:45:30.484083626Z" level=info msg="Starting up"
	Apr 22 11:46:30 test-preload-194000 dockerd[808]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 22 11:46:30 test-preload-194000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 22 11:46:30 test-preload-194000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 22 11:46:30 test-preload-194000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0422 04:46:30.460527    6753 out.go:239] * 
	W0422 04:46:30.461454    6753 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 04:46:30.523105    6753 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:68: out/minikube-darwin-amd64 start -p test-preload-194000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit  failed: exit status 90
panic.go:626: *** TestPreload FAILED at 2024-04-22 04:46:30.590908 -0700 PDT m=+4172.263559730
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-194000 -n test-preload-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-194000 -n test-preload-194000: exit status 6 (154.60698ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 04:46:30.733306    6842 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-194000" does not appear in /Users/jenkins/minikube-integration/18711-1033/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-194000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-194000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-194000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-194000: (5.275025003s)
--- FAIL: TestPreload (229.48s)
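
The journalctl output above records the root cause: on the second start, dockerd (pid 808) timed out dialing /run/containerd/containerd.sock after 60 seconds ("context deadline exceeded"), so `sudo systemctl restart docker` exited with status 1 and minikube aborted with RUNTIME_ENABLE. A minimal triage sketch for reproducing and narrowing this failure; the profile name is taken from this run, and since the cleanup step above already deleted the profile, these commands assume a fresh reproduction where the VM is still reachable over SSH:

	# inspect the system containerd unit that dockerd depends on
	# (profile name test-preload-194000 is from this run; adjust for your rerun)
	minikube ssh -p test-preload-194000 -- sudo systemctl status containerd --no-pager
	minikube ssh -p test-preload-194000 -- sudo journalctl -u containerd --no-pager -n 50
	# if containerd is down or wedged, bring it up before retrying docker
	minikube ssh -p test-preload-194000 -- "sudo systemctl restart containerd && sudo systemctl restart docker"

This only distinguishes whether containerd or dockerd is at fault; it is a diagnostic sketch, not the test's own recovery path.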

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-654000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0
E0422 05:19:18.600062    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:19:53.290229    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 05:20:06.098753    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 05:20:28.520615    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:20:32.469459    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-654000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0: exit status 90 (1m16.54888964s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-654000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "default-k8s-diff-port-654000" primary control-plane node in "default-k8s-diff-port-654000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 05:19:18.562760   11560 out.go:291] Setting OutFile to fd 1 ...
	I0422 05:19:18.562982   11560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 05:19:18.562988   11560 out.go:304] Setting ErrFile to fd 2...
	I0422 05:19:18.562992   11560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 05:19:18.563181   11560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 05:19:18.564894   11560 out.go:298] Setting JSON to false
	I0422 05:19:18.589611   11560 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6528,"bootTime":1713781830,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 05:19:18.589728   11560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 05:19:18.611211   11560 out.go:177] * [default-k8s-diff-port-654000] minikube v1.33.0 on Darwin 14.4.1
	I0422 05:19:18.678038   11560 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 05:19:18.669059   11560 notify.go:220] Checking for updates...
	I0422 05:19:18.727047   11560 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 05:19:18.775165   11560 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 05:19:18.822011   11560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 05:19:18.880972   11560 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	I0422 05:19:18.922924   11560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 05:19:18.944448   11560 config.go:182] Loaded profile config "embed-certs-596000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 05:19:18.944538   11560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 05:19:18.973996   11560 out.go:177] * Using the hyperkit driver based on user configuration
	I0422 05:19:19.016082   11560 start.go:297] selected driver: hyperkit
	I0422 05:19:19.016100   11560 start.go:901] validating driver "hyperkit" against <nil>
	I0422 05:19:19.016115   11560 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 05:19:19.019185   11560 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 05:19:19.019308   11560 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18711-1033/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0422 05:19:19.027891   11560 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0422 05:19:19.032481   11560 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 05:19:19.032547   11560 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0422 05:19:19.032590   11560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 05:19:19.032838   11560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 05:19:19.032930   11560 cni.go:84] Creating CNI manager for ""
	I0422 05:19:19.032946   11560 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 05:19:19.032969   11560 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 05:19:19.033059   11560 start.go:340] cluster config:
	{Name:default-k8s-diff-port-654000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 05:19:19.033151   11560 iso.go:125] acquiring lock: {Name:mk174d786084574fba345b763762a2b8adb514c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 05:19:19.054031   11560 out.go:177] * Starting "default-k8s-diff-port-654000" primary control-plane node in "default-k8s-diff-port-654000" cluster
	I0422 05:19:19.074945   11560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 05:19:19.074981   11560 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0422 05:19:19.074996   11560 cache.go:56] Caching tarball of preloaded images
	I0422 05:19:19.075107   11560 preload.go:173] Found /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0422 05:19:19.075116   11560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 05:19:19.075187   11560 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/default-k8s-diff-port-654000/config.json ...
	I0422 05:19:19.075204   11560 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/default-k8s-diff-port-654000/config.json: {Name:mk26b36e516a09c3438bbf648a9aec9a105a43a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 05:19:19.075548   11560 start.go:360] acquireMachinesLock for default-k8s-diff-port-654000: {Name:mke81a6cfc4bf5ce8e1de7ad51be0d2fed5c5582 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 05:19:19.075606   11560 start.go:364] duration metric: took 44.79µs to acquireMachinesLock for "default-k8s-diff-port-654000"
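The acquireMachinesLock spec logged above ({Delay:500ms Timeout:13m0s}) suggests a poll-with-deadline pattern: retry every Delay, give up at Timeout. The sketch below only reproduces that observable behavior with an O_EXCL lock file; it is not minikube's actual lock implementation.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file every delay, giving up at
// timeout. O_CREATE|O_EXCL makes creation atomic, so only one process wins.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to provision the machine")
}
```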
	I0422 05:19:19.075635   11560 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0422 05:19:19.075680   11560 start.go:125] createHost starting for "" (driver="hyperkit")
	I0422 05:19:19.118033   11560 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 05:19:19.118192   11560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 05:19:19.118236   11560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 05:19:19.127083   11560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57084
	I0422 05:19:19.127439   11560 main.go:141] libmachine: () Calling .GetVersion
	I0422 05:19:19.127855   11560 main.go:141] libmachine: Using API Version  1
	I0422 05:19:19.127864   11560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 05:19:19.128086   11560 main.go:141] libmachine: () Calling .GetMachineName
	I0422 05:19:19.128368   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetMachineName
	I0422 05:19:19.128566   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:19.128675   11560 start.go:159] libmachine.API.Create for "default-k8s-diff-port-654000" (driver="hyperkit")
	I0422 05:19:19.128698   11560 client.go:168] LocalClient.Create starting
	I0422 05:19:19.128734   11560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem
	I0422 05:19:19.128794   11560 main.go:141] libmachine: Decoding PEM data...
	I0422 05:19:19.128812   11560 main.go:141] libmachine: Parsing certificate...
	I0422 05:19:19.128870   11560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem
	I0422 05:19:19.128908   11560 main.go:141] libmachine: Decoding PEM data...
	I0422 05:19:19.128920   11560 main.go:141] libmachine: Parsing certificate...
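LocalClient.Create starts by loading ca.pem and cert.pem in three logged steps: read the file, decode the PEM block, parse the X.509 certificate. A minimal equivalent with Go's standard library; the path is the pattern from the log, shortened.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/certs/ca.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil {
		panic("no PEM block found in ca.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
	if err != nil {
		panic(err)
	}
	fmt.Println("loaded CA:", cert.Subject.CommonName)
}
```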
	I0422 05:19:19.128932   11560 main.go:141] libmachine: Running pre-create checks...
	I0422 05:19:19.128943   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .PreCreateCheck
	I0422 05:19:19.129023   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:19.129170   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetConfigRaw
	I0422 05:19:19.129620   11560 main.go:141] libmachine: Creating machine...
	I0422 05:19:19.129629   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .Create
	I0422 05:19:19.129702   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:19.129831   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | I0422 05:19:19.129698   11568 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18711-1033/.minikube
	I0422 05:19:19.129884   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Downloading /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18711-1033/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 05:19:19.328511   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | I0422 05:19:19.328447   11568 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/id_rsa...
	I0422 05:19:19.456264   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | I0422 05:19:19.456207   11568 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/default-k8s-diff-port-654000.rawdisk...
	I0422 05:19:19.456289   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Writing magic tar header
	I0422 05:19:19.456342   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Writing SSH key tar header
	I0422 05:19:19.456699   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | I0422 05:19:19.456617   11568 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000 ...
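The "magic tar header" lines reflect how the driver seeds the raw disk: a tiny tar archive carrying the freshly generated SSH key is written at the head of the .rawdisk file, which is then sparse-extended to the full disk size so the guest can pick the key up on first boot. A sketch under those assumptions; the entry name, mode, and demo key are illustrative, not the driver's exact layout.

```go
package main

import (
	"archive/tar"
	"os"
)

func writeDiskWithKey(path string, pubKey []byte, sizeBytes int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// "Writing magic tar header" / "Writing SSH key tar header"
	tw := tar.NewWriter(f)
	hdr := &tar.Header{
		Name:     ".ssh/authorized_keys",
		Mode:     0o600,
		Size:     int64(len(pubKey)),
		Typeflag: tar.TypeReg,
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	// Sparse-extend to the requested disk size (20000 MB in this run).
	return f.Truncate(sizeBytes)
}

func main() {
	if err := writeDiskWithKey("/tmp/demo.rawdisk", []byte("ssh-rsa AAAA... demo"), 20000*1024*1024); err != nil {
		panic(err)
	}
}
```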
	I0422 05:19:19.834201   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:19.834234   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/hyperkit.pid
	I0422 05:19:19.834284   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Using UUID fc58519d-3f53-4768-b928-ff447fe04e82
	I0422 05:19:19.862378   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Generated MAC 3e:3:77:57:f4:fe
	I0422 05:19:19.862397   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-654000
	I0422 05:19:19.862440   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fc58519d-3f53-4768-b928-ff447fe04e82", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 05:19:19.862469   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fc58519d-3f53-4768-b928-ff447fe04e82", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0422 05:19:19.862518   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fc58519d-3f53-4768-b928-ff447fe04e82", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/default-k8s-diff-port-654000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-654000"}
	I0422 05:19:19.862571   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fc58519d-3f53-4768-b928-ff447fe04e82 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/default-k8s-diff-port-654000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/tty,log=/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/console-ring -f kexec,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/bzimage,/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-654000"
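The Arguments/CmdLine lines above show the full argv the driver assembles: -F pidfile, -c/-m for CPU and memory, PCI slots via repeated -s flags, the serial console via -l, and a kexec direct-kernel boot via -f. An illustrative reconstruction with os/exec; paths are shortened, and this is a sketch rather than the docker-machine-driver-hyperkit source.

```go
package main

import (
	"fmt"
	"os/exec"
)

// hyperkitCmd rebuilds the argv pattern from the log for a given state
// directory, VM UUID, kernel command line, and sizing.
func hyperkitCmd(stateDir, uuid, kernelCmdline string, cpus, memMB int) *exec.Cmd {
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", fmt.Sprint(cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net", // vmnet NIC; the MAC is derived from -U
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/disk.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + kernelCmdline,
	}
	return exec.Command("/usr/local/bin/hyperkit", args...)
}

func main() {
	cmd := hyperkitCmd("/tmp/vm", "fc58519d-3f53-4768-b928-ff447fe04e82",
		"loglevel=3 console=ttyS0", 2, 2200)
	fmt.Println(cmd.String())
}
```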
	I0422 05:19:19.862583   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0422 05:19:19.867339   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 DEBUG: hyperkit: Pid is 11569
	I0422 05:19:19.868009   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Attempt 0
	I0422 05:19:19.868025   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:19.868139   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | hyperkit pid from json: 11569
	I0422 05:19:19.869582   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Searching for 3e:3:77:57:f4:fe in /var/db/dhcpd_leases ...
	I0422 05:19:19.869733   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Found 45 entries in /var/db/dhcpd_leases!
	I0422 05:19:19.869753   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:ea:7a:95:74:5a:1d ID:1,ea:7a:95:74:5a:1d Lease:0x6627a724}
	I0422 05:19:19.869761   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:5a:df:7c:1f:88:c3 ID:1,5a:df:7c:1f:88:c3 Lease:0x6627a5a2}
	I0422 05:19:19.869771   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:b2:62:e4:47:c9:fd ID:1,b2:62:e4:47:c9:fd Lease:0x6627a5b3}
	I0422 05:19:19.869778   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:1a:21:11:28:91:1d ID:1,1a:21:11:28:91:1d Lease:0x6627a4ea}
	I0422 05:19:19.869787   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:3e:4c:6f:b5:e1:1b ID:1,3e:4c:6f:b5:e1:1b Lease:0x6627a4a6}
	I0422 05:19:19.869797   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:f6:72:ab:7f:9:5c ID:1,f6:72:ab:7f:9:5c Lease:0x6627a495}
	I0422 05:19:19.869831   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:a:a2:68:35:3:b4 ID:1,a:a2:68:35:3:b4 Lease:0x6627a442}
	I0422 05:19:19.869849   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e2:3b:6b:e2:b:17 ID:1,e2:3b:6b:e2:b:17 Lease:0x6627a433}
	I0422 05:19:19.869933   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:5e:ff:16:e:9e:3b ID:1,5e:ff:16:e:9e:3b Lease:0x6627a3c4}
	I0422 05:19:19.869979   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:72:b:8d:a8:56:c8 ID:1,72:b:8d:a8:56:c8 Lease:0x6627a371}
	I0422 05:19:19.869994   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:e:29:e8:52:e4:99 ID:1,e:29:e8:52:e4:99 Lease:0x6627a304}
	I0422 05:19:19.870008   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:aa:3b:f1:3e:c4:e9 ID:1,aa:3b:f1:3e:c4:e9 Lease:0x6627a2f4}
	I0422 05:19:19.870022   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:de:18:67:74:8:5c ID:1,de:18:67:74:8:5c Lease:0x66265169}
	I0422 05:19:19.870038   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:96:44:29:50:62:d ID:1,96:44:29:50:62:d Lease:0x66265129}
	I0422 05:19:19.870057   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:82:66:c4:de:66:d ID:1,82:66:c4:de:66:d Lease:0x6627a264}
	I0422 05:19:19.870075   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:9e:2c:73:68:4d:1c ID:1,9e:2c:73:68:4d:1c Lease:0x6627a24e}
	I0422 05:19:19.870087   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:da:f:19:fc:83:2c ID:1,da:f:19:fc:83:2c Lease:0x6626508f}
	I0422 05:19:19.870097   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:66:a8:2a:26:ef:10 ID:1,66:a8:2a:26:ef:10 Lease:0x6627a14a}
	I0422 05:19:19.870118   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:86:cd:f0:44:ed:aa ID:1,86:cd:f0:44:ed:aa Lease:0x66264fb2}
	I0422 05:19:19.870139   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:1e:de:59:d9:7d:98 ID:1,1e:de:59:d9:7d:98 Lease:0x6627a0f7}
	I0422 05:19:19.870154   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:7a:8b:c2:eb:1d:a2 ID:1,7a:8b:c2:eb:1d:a2 Lease:0x6627a0e9}
	I0422 05:19:19.870164   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:ea:dd:32:b:96:af ID:1,ea:dd:32:b:96:af Lease:0x6627a0cb}
	I0422 05:19:19.870172   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:be:9c:ec:19:b1:b0 ID:1,be:9c:ec:19:b1:b0 Lease:0x6627a0a1}
	I0422 05:19:19.870181   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:32:b:ae:cb:1a:59 ID:1,32:b:ae:cb:1a:59 Lease:0x6627a088}
	I0422 05:19:19.870210   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:2:13:61:e0:74:6f ID:1,2:13:61:e0:74:6f Lease:0x6627a015}
	I0422 05:19:19.870228   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:88:1b:65:7f:65 ID:1,fe:88:1b:65:7f:65 Lease:0x66279fa5}
	I0422 05:19:19.870248   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:12:fb:ad:2e:1d:5a ID:1,12:fb:ad:2e:1d:5a Lease:0x66279f3b}
	I0422 05:19:19.870261   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:12:95:d2:1e:3b:84 ID:1,12:95:d2:1e:3b:84 Lease:0x66279e3e}
	I0422 05:19:19.870270   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:33:e:18:56:49 ID:1,92:33:e:18:56:49 Lease:0x66264c0f}
	I0422 05:19:19.870279   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:e2:d0:5:63:30:40 ID:1,e2:d0:5:63:30:40 Lease:0x66279dd4}
	I0422 05:19:19.870301   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:3e:5c:84:88:5b:2b ID:1,3e:5c:84:88:5b:2b Lease:0x66279dab}
	I0422 05:19:19.870329   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:62:93:88:e8:f6:46 ID:1,62:93:88:e8:f6:46 Lease:0x662649e4}
	I0422 05:19:19.870350   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8a:f:cc:40:b7:4e ID:1,8a:f:cc:40:b7:4e Lease:0x662649cb}
	I0422 05:19:19.870363   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:72:ab:cd:48:d2:b ID:1,72:ab:cd:48:d2:b Lease:0x66279af9}
	I0422 05:19:19.870371   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:56:67:b0:d1:f4:71 ID:1,56:67:b0:d1:f4:71 Lease:0x66279ad0}
	I0422 05:19:19.870381   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:83:1b:77:de:61 ID:1,a2:83:1b:77:de:61 Lease:0x66279a71}
	I0422 05:19:19.870397   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ce:22:96:cf:cf:8 ID:1,ce:22:96:cf:cf:8 Lease:0x66279a43}
	I0422 05:19:19.870415   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:26:70:e3:26:68:f0 ID:1,26:70:e3:26:68:f0 Lease:0x662648b7}
	I0422 05:19:19.870456   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:96:fd:92:82:5b:dc ID:1,96:fd:92:82:5b:dc Lease:0x6627935e}
	I0422 05:19:19.870492   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:c6:dd:3d:cf:f0:d2 ID:1,c6:dd:3d:cf:f0:d2 Lease:0x662648ae}
	I0422 05:19:19.870517   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:6:29:7d:cb:c:52 ID:1,6:29:7d:cb:c:52 Lease:0x66279261}
	I0422 05:19:19.870536   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:7e:d6:67:50:b9:d1 ID:1,7e:d6:67:50:b9:d1 Lease:0x66279159}
	I0422 05:19:19.870551   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:39:f9:6a:26:1b ID:1,22:39:f9:6a:26:1b Lease:0x66263f35}
	I0422 05:19:19.870566   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:8e:c0:48:d9:4c:87 ID:1,8e:c0:48:d9:4c:87 Lease:0x66278f74}
	I0422 05:19:19.870585   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x66278e11}
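Each "Searching for 3e:3:77:57:f4:fe" attempt re-parses macOS's /var/db/dhcpd_leases, whose entries are brace-delimited key=value blocks, looking for the MAC vmnet assigned to the new VM. A hedged Go sketch of that scan; findIPByMAC is an illustrative name, and the hw_address field carries a leading type byte ("1,") as the lease file format uses.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC walks the lease file's { key=value ... } blocks and returns
// the ip_address of the block whose hw_address matches mac.
func findIPByMAC(leaseFile, mac string) (string, bool) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=1,"):
			if strings.TrimPrefix(line, "hw_address=1,") == mac {
				return ip, true // ip_address precedes hw_address in a block
			}
		case line == "}":
			ip = "" // entry closed without a match
		}
	}
	return "", false
}

func main() {
	if ip, ok := findIPByMAC("/var/db/dhcpd_leases", "3e:3:77:57:f4:fe"); ok {
		fmt.Println("VM leased", ip)
	} else {
		fmt.Println("no lease yet; the driver retries every 2s")
	}
}
```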
	I0422 05:19:19.874637   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0422 05:19:19.883959   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0422 05:19:19.884973   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 05:19:19.885001   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 05:19:19.885016   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 05:19:19.885028   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 05:19:20.307210   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0422 05:19:20.307228   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0422 05:19:20.422935   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0422 05:19:20.422956   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0422 05:19:20.422965   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0422 05:19:20.422971   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0422 05:19:20.423803   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0422 05:19:20.423811   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0422 05:19:21.870587   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Attempt 1
	I0422 05:19:21.870603   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:21.870707   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | hyperkit pid from json: 11569
	I0422 05:19:21.871686   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Searching for 3e:3:77:57:f4:fe in /var/db/dhcpd_leases ...
	I0422 05:19:21.871769   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Found 45 entries in /var/db/dhcpd_leases!
	[... 45 dhcp entries identical to the Attempt 0 scan above; no lease for 3e:3:77:57:f4:fe ...]
	I0422 05:19:23.872593   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Attempt 2
	I0422 05:19:23.872647   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:23.872662   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | hyperkit pid from json: 11569
	I0422 05:19:23.873552   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Searching for 3e:3:77:57:f4:fe in /var/db/dhcpd_leases ...
	I0422 05:19:23.873643   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Found 45 entries in /var/db/dhcpd_leases!
	[... 45 dhcp entries identical to the Attempt 0 scan above; no lease for 3e:3:77:57:f4:fe ...]
	I0422 05:19:25.864405   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0422 05:19:25.864437   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0422 05:19:25.864444   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0422 05:19:25.874454   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Attempt 3
	I0422 05:19:25.874465   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:25.874554   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | hyperkit pid from json: 11569
	I0422 05:19:25.875371   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Searching for 3e:3:77:57:f4:fe in /var/db/dhcpd_leases ...
	I0422 05:19:25.875466   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Found 45 entries in /var/db/dhcpd_leases!
	[... 45 dhcp entries identical to the Attempt 0 scan above; no lease for 3e:3:77:57:f4:fe ...]
	I0422 05:19:25.888574   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | 2024/04/22 05:19:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0422 05:19:27.876696   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Attempt 4
	I0422 05:19:27.876714   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:27.876813   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | hyperkit pid from json: 11569
	I0422 05:19:27.877636   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Searching for 3e:3:77:57:f4:fe in /var/db/dhcpd_leases ...
	I0422 05:19:27.877722   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Found 45 entries in /var/db/dhcpd_leases!
	I0422 05:19:27.877731   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:ea:7a:95:74:5a:1d ID:1,ea:7a:95:74:5a:1d Lease:0x6627a724}
	I0422 05:19:27.877740   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:5a:df:7c:1f:88:c3 ID:1,5a:df:7c:1f:88:c3 Lease:0x6627a5a2}
	I0422 05:19:27.877751   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:b2:62:e4:47:c9:fd ID:1,b2:62:e4:47:c9:fd Lease:0x6627a5b3}
	I0422 05:19:27.877765   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:1a:21:11:28:91:1d ID:1,1a:21:11:28:91:1d Lease:0x6627a4ea}
	I0422 05:19:27.877775   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:3e:4c:6f:b5:e1:1b ID:1,3e:4c:6f:b5:e1:1b Lease:0x6627a4a6}
	I0422 05:19:27.877785   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:f6:72:ab:7f:9:5c ID:1,f6:72:ab:7f:9:5c Lease:0x6627a495}
	I0422 05:19:27.877793   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:a:a2:68:35:3:b4 ID:1,a:a2:68:35:3:b4 Lease:0x6627a442}
	I0422 05:19:27.877799   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e2:3b:6b:e2:b:17 ID:1,e2:3b:6b:e2:b:17 Lease:0x6627a433}
	I0422 05:19:27.877842   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:5e:ff:16:e:9e:3b ID:1,5e:ff:16:e:9e:3b Lease:0x6627a3c4}
	I0422 05:19:27.877858   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:72:b:8d:a8:56:c8 ID:1,72:b:8d:a8:56:c8 Lease:0x6627a371}
	I0422 05:19:27.877874   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:e:29:e8:52:e4:99 ID:1,e:29:e8:52:e4:99 Lease:0x6627a304}
	I0422 05:19:27.877897   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:aa:3b:f1:3e:c4:e9 ID:1,aa:3b:f1:3e:c4:e9 Lease:0x6627a2f4}
	I0422 05:19:27.877905   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:de:18:67:74:8:5c ID:1,de:18:67:74:8:5c Lease:0x66265169}
	I0422 05:19:27.877911   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:96:44:29:50:62:d ID:1,96:44:29:50:62:d Lease:0x66265129}
	I0422 05:19:27.877919   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:82:66:c4:de:66:d ID:1,82:66:c4:de:66:d Lease:0x6627a264}
	I0422 05:19:27.877928   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:9e:2c:73:68:4d:1c ID:1,9e:2c:73:68:4d:1c Lease:0x6627a24e}
	I0422 05:19:27.877936   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:da:f:19:fc:83:2c ID:1,da:f:19:fc:83:2c Lease:0x6626508f}
	I0422 05:19:27.877944   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:66:a8:2a:26:ef:10 ID:1,66:a8:2a:26:ef:10 Lease:0x6627a14a}
	I0422 05:19:27.877976   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:86:cd:f0:44:ed:aa ID:1,86:cd:f0:44:ed:aa Lease:0x66264fb2}
	I0422 05:19:27.877987   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:1e:de:59:d9:7d:98 ID:1,1e:de:59:d9:7d:98 Lease:0x6627a0f7}
	I0422 05:19:27.877995   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:7a:8b:c2:eb:1d:a2 ID:1,7a:8b:c2:eb:1d:a2 Lease:0x6627a0e9}
	I0422 05:19:27.878011   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:ea:dd:32:b:96:af ID:1,ea:dd:32:b:96:af Lease:0x6627a0cb}
	I0422 05:19:27.878028   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:be:9c:ec:19:b1:b0 ID:1,be:9c:ec:19:b1:b0 Lease:0x6627a0a1}
	I0422 05:19:27.878047   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:32:b:ae:cb:1a:59 ID:1,32:b:ae:cb:1a:59 Lease:0x6627a088}
	I0422 05:19:27.878058   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:2:13:61:e0:74:6f ID:1,2:13:61:e0:74:6f Lease:0x6627a015}
	I0422 05:19:27.878067   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:88:1b:65:7f:65 ID:1,fe:88:1b:65:7f:65 Lease:0x66279fa5}
	I0422 05:19:27.878077   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:12:fb:ad:2e:1d:5a ID:1,12:fb:ad:2e:1d:5a Lease:0x66279f3b}
	I0422 05:19:27.878085   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:12:95:d2:1e:3b:84 ID:1,12:95:d2:1e:3b:84 Lease:0x66279e3e}
	I0422 05:19:27.878093   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:33:e:18:56:49 ID:1,92:33:e:18:56:49 Lease:0x66264c0f}
	I0422 05:19:27.878101   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:e2:d0:5:63:30:40 ID:1,e2:d0:5:63:30:40 Lease:0x66279dd4}
	I0422 05:19:27.878113   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:3e:5c:84:88:5b:2b ID:1,3e:5c:84:88:5b:2b Lease:0x66279dab}
	I0422 05:19:27.878121   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:62:93:88:e8:f6:46 ID:1,62:93:88:e8:f6:46 Lease:0x662649e4}
	I0422 05:19:27.878128   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8a:f:cc:40:b7:4e ID:1,8a:f:cc:40:b7:4e Lease:0x662649cb}
	I0422 05:19:27.878136   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:72:ab:cd:48:d2:b ID:1,72:ab:cd:48:d2:b Lease:0x66279af9}
	I0422 05:19:27.878146   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:56:67:b0:d1:f4:71 ID:1,56:67:b0:d1:f4:71 Lease:0x66279ad0}
	I0422 05:19:27.878154   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:83:1b:77:de:61 ID:1,a2:83:1b:77:de:61 Lease:0x66279a71}
	I0422 05:19:27.878160   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ce:22:96:cf:cf:8 ID:1,ce:22:96:cf:cf:8 Lease:0x66279a43}
	I0422 05:19:27.878167   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:26:70:e3:26:68:f0 ID:1,26:70:e3:26:68:f0 Lease:0x662648b7}
	I0422 05:19:27.878193   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:96:fd:92:82:5b:dc ID:1,96:fd:92:82:5b:dc Lease:0x6627935e}
	I0422 05:19:27.878205   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:c6:dd:3d:cf:f0:d2 ID:1,c6:dd:3d:cf:f0:d2 Lease:0x662648ae}
	I0422 05:19:27.878212   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:6:29:7d:cb:c:52 ID:1,6:29:7d:cb:c:52 Lease:0x66279261}
	I0422 05:19:27.878219   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:7e:d6:67:50:b9:d1 ID:1,7e:d6:67:50:b9:d1 Lease:0x66279159}
	I0422 05:19:27.878226   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:22:39:f9:6a:26:1b ID:1,22:39:f9:6a:26:1b Lease:0x66263f35}
	I0422 05:19:27.878231   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:8e:c0:48:d9:4c:87 ID:1,8e:c0:48:d9:4c:87 Lease:0x66278f74}
	I0422 05:19:27.878238   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x66278e11}
	I0422 05:19:29.879024   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Attempt 5
	I0422 05:19:29.879041   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:29.879102   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | hyperkit pid from json: 11569
	I0422 05:19:29.879958   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Searching for 3e:3:77:57:f4:fe in /var/db/dhcpd_leases ...
	I0422 05:19:29.880048   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Found 46 entries in /var/db/dhcpd_leases!
	I0422 05:19:29.880060   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:3e:3:77:57:f4:fe ID:1,3e:3:77:57:f4:fe Lease:0x6627a750}
	I0422 05:19:29.880068   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | Found match: 3e:3:77:57:f4:fe
	I0422 05:19:29.880077   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | IP: 192.169.0.47
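The lease scan above is how the hyperkit driver resolves the new VM's IP: it polls macOS's /var/db/dhcpd_leases until an entry's hardware address matches the MAC hyperkit generated for the guest (3e:3:77:57:f4:fe, matched on attempt 5). A minimal Go sketch of that lookup, assuming the on-disk ip_address=/hw_address= field layout of dhcpd_leases; the function name is illustrative, not the driver's:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans a dhcpd_leases-style file for the entry whose
// hw_address matches the given MAC and returns that entry's ip_address.
// Sketch only; the real driver parses whole lease blocks.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // remember the address of the block we are inside
		}
		// hw_address lines look like "hw_address=1,3e:3:77:57:f4:fe"
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
			return ip, sc.Err()
		}
	}
	return "", fmt.Errorf("MAC %s not found in %s", mac, path)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "3e:3:77:57:f4:fe")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.169.0.47 in the run above
}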
	I0422 05:19:29.880128   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetConfigRaw
	I0422 05:19:29.880783   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:29.880916   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:29.881017   11560 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 05:19:29.881031   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetState
	I0422 05:19:29.881121   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 05:19:29.881200   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) DBG | hyperkit pid from json: 11569
	I0422 05:19:29.882067   11560 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 05:19:29.882078   11560 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 05:19:29.882085   11560 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 05:19:29.882091   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:29.882192   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:29.882283   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:29.882382   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:29.882465   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:29.882591   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:29.882793   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:29.882801   11560 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 05:19:30.932063   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
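The "exit 0" round-trip above is the classic docker-machine SSH readiness probe: keep dialing and running a no-op command until it exits cleanly. A sketch of that loop with golang.org/x/crypto/ssh, assuming key auth is already wired up; names and timeouts are illustrative:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH polls the guest by running "exit 0" until the command
// succeeds or the deadline passes.
func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, err := client.NewSession()
			if err == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // SSH is up
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh not reachable at %s within %s", addr, timeout)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ /* key auth elided in this sketch */ },
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
	}
	fmt.Println(waitForSSH("192.169.0.47:22", cfg, 2*time.Minute))
}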
	I0422 05:19:30.932076   11560 main.go:141] libmachine: Detecting the provisioner...
	I0422 05:19:30.932082   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:30.932209   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:30.932309   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:30.932393   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:30.932499   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:30.932634   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:30.932787   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:30.932795   11560 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 05:19:30.982207   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 05:19:30.982254   11560 main.go:141] libmachine: found compatible host: buildroot
	I0422 05:19:30.982260   11560 main.go:141] libmachine: Provisioning with buildroot...
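Provisioner selection keys off the ID field of the /etc/os-release content fetched above. A small sketch of the parse, assuming the usual KEY=value (optionally quoted) format:

package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output (as echoed above)
// into a key/value map; the ID value drives provisioner selection.
func parseOSRelease(out string) map[string]string {
	info := map[string]string{}
	for _, line := range strings.Split(out, "\n") {
		if k, v, ok := strings.Cut(strings.TrimSpace(line), "="); ok {
			info[k] = strings.Trim(v, `"`)
		}
	}
	return info
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
	fmt.Println(parseOSRelease(out)["ID"]) // buildroot
}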
	I0422 05:19:30.982266   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetMachineName
	I0422 05:19:30.982411   11560 buildroot.go:166] provisioning hostname "default-k8s-diff-port-654000"
	I0422 05:19:30.982424   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetMachineName
	I0422 05:19:30.982519   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:30.982607   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:30.982698   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:30.982773   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:30.982872   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:30.983017   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:30.983156   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:30.983166   11560 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-654000 && echo "default-k8s-diff-port-654000" | sudo tee /etc/hostname
	I0422 05:19:31.043153   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-654000
	
	I0422 05:19:31.043175   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:31.043313   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:31.043420   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.043518   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.043621   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:31.043756   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:31.043915   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:31.043934   11560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-654000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-654000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-654000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 05:19:31.099776   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 05:19:31.099797   11560 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18711-1033/.minikube CaCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18711-1033/.minikube}
	I0422 05:19:31.099812   11560 buildroot.go:174] setting up certificates
	I0422 05:19:31.099821   11560 provision.go:84] configureAuth start
	I0422 05:19:31.099828   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetMachineName
	I0422 05:19:31.099963   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetIP
	I0422 05:19:31.100084   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:31.100183   11560 provision.go:143] copyHostCerts
	I0422 05:19:31.100263   11560 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem, removing ...
	I0422 05:19:31.100273   11560 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem
	I0422 05:19:31.100433   11560 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/ca.pem (1082 bytes)
	I0422 05:19:31.100675   11560 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem, removing ...
	I0422 05:19:31.100682   11560 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem
	I0422 05:19:31.100761   11560 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/cert.pem (1123 bytes)
	I0422 05:19:31.100947   11560 exec_runner.go:144] found /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem, removing ...
	I0422 05:19:31.100953   11560 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem
	I0422 05:19:31.101029   11560 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18711-1033/.minikube/key.pem (1675 bytes)
	I0422 05:19:31.101207   11560 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-654000 san=[127.0.0.1 192.169.0.47 default-k8s-diff-port-654000 localhost minikube]
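The server cert generated here carries the SANs listed in the log (127.0.0.1, 192.169.0.47, the machine name, localhost, minikube) and is signed by the local CA key pair. A self-contained crypto/x509 sketch of that issuance; key sizes, lifetimes, and subject fields are illustrative, not minikube's exact settings:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Local CA (errors elided for brevity in this sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-654000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-654000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.47")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert issued: %d DER bytes\n", len(der))
}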
	I0422 05:19:31.182399   11560 provision.go:177] copyRemoteCerts
	I0422 05:19:31.182485   11560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 05:19:31.182505   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:31.182674   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:31.182793   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.182915   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:31.183042   11560 sshutil.go:53] new ssh client: &{IP:192.169.0.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/id_rsa Username:docker}
	I0422 05:19:31.215817   11560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0422 05:19:31.235660   11560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 05:19:31.254648   11560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0422 05:19:31.274899   11560 provision.go:87] duration metric: took 175.065407ms to configureAuth
	I0422 05:19:31.274914   11560 buildroot.go:189] setting minikube options for container-runtime
	I0422 05:19:31.275062   11560 config.go:182] Loaded profile config "default-k8s-diff-port-654000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 05:19:31.275077   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:31.275224   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:31.275314   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:31.275419   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.275513   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.275594   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:31.275714   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:31.275848   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:31.275856   11560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 05:19:31.325998   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0422 05:19:31.326010   11560 buildroot.go:70] root file system type: tmpfs
	I0422 05:19:31.326089   11560 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 05:19:31.326102   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:31.326230   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:31.326314   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.326398   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.326480   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:31.326622   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:31.326757   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:31.326798   11560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 05:19:31.387987   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 05:19:31.388007   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:31.388162   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:31.388243   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.388325   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:31.388415   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:31.388545   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:31.388682   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:31.388694   11560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 05:19:32.889960   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
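The one-liner above makes the unit install idempotent: only when the rendered docker.service.new differs from the installed file does it move it into place and daemon-reload/enable/restart (here the diff fails because no unit exists yet, so the fresh unit is installed and the symlink created). The compare-then-replace core, sketched locally with a hypothetical demo path:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// ensureUnit rewrites the unit file only when the rendered content
// differs, and reports whether a daemon-reload/restart is needed --
// the same idea as the `diff || { mv; ...; restart; }` shell above.
func ensureUnit(path string, rendered []byte) (changed bool, err error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := ensureUnit("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("restart needed:", changed) // true on first run, false after
}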
	I0422 05:19:32.889974   11560 main.go:141] libmachine: Checking connection to Docker...
	I0422 05:19:32.889982   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetURL
	I0422 05:19:32.890144   11560 main.go:141] libmachine: Docker is up and running!
	I0422 05:19:32.890153   11560 main.go:141] libmachine: Reticulating splines...
	I0422 05:19:32.890157   11560 client.go:171] duration metric: took 13.761485555s to LocalClient.Create
	I0422 05:19:32.890170   11560 start.go:167] duration metric: took 13.761527318s to libmachine.API.Create "default-k8s-diff-port-654000"
	I0422 05:19:32.890181   11560 start.go:293] postStartSetup for "default-k8s-diff-port-654000" (driver="hyperkit")
	I0422 05:19:32.890188   11560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 05:19:32.890207   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:32.890346   11560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 05:19:32.890359   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:32.890476   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:32.890608   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:32.890714   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:32.890815   11560 sshutil.go:53] new ssh client: &{IP:192.169.0.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/id_rsa Username:docker}
	I0422 05:19:32.930567   11560 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 05:19:32.936443   11560 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 05:19:32.936463   11560 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/addons for local assets ...
	I0422 05:19:32.936579   11560 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18711-1033/.minikube/files for local assets ...
	I0422 05:19:32.936776   11560 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem -> 14842.pem in /etc/ssl/certs
	I0422 05:19:32.936989   11560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 05:19:32.952875   11560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/ssl/certs/14842.pem --> /etc/ssl/certs/14842.pem (1708 bytes)
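postStartSetup's filesync scan maps every file under .minikube/files to the same path inside the guest, which is how 14842.pem above lands in /etc/ssl/certs. A sketch of that mapping with filepath.WalkDir; paths come from the log, the helper name is illustrative:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// listAssets walks the local "files" root and pairs each file with
// its destination path inside the guest (the path relative to root).
func listAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// .../files/etc/ssl/certs/14842.pem -> /etc/ssl/certs/14842.pem
		assets[p] = strings.TrimPrefix(p, root)
		return nil
	})
	return assets, err
}

func main() {
	m, err := listAssets("/Users/jenkins/minikube-integration/18711-1033/.minikube/files")
	if err != nil {
		fmt.Println(err)
		return
	}
	for local, remote := range m {
		fmt.Println(local, "->", remote)
	}
}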
	I0422 05:19:32.973708   11560 start.go:296] duration metric: took 83.519123ms for postStartSetup
	I0422 05:19:32.973738   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetConfigRaw
	I0422 05:19:32.974401   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetIP
	I0422 05:19:32.974569   11560 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/default-k8s-diff-port-654000/config.json ...
	I0422 05:19:32.974887   11560 start.go:128] duration metric: took 13.899226987s to createHost
	I0422 05:19:32.974900   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:32.975008   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:32.975103   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:32.975198   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:32.975291   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:32.975418   11560 main.go:141] libmachine: Using SSH client type: native
	I0422 05:19:32.975548   11560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6f0ab80] 0x6f0d8e0 <nil>  [] 0s} 192.169.0.47 22 <nil> <nil>}
	I0422 05:19:32.975556   11560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 05:19:33.025410   11560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713788372.691988258
	
	I0422 05:19:33.025424   11560 fix.go:216] guest clock: 1713788372.691988258
	I0422 05:19:33.025430   11560 fix.go:229] Guest: 2024-04-22 05:19:32.691988258 -0700 PDT Remote: 2024-04-22 05:19:32.974895 -0700 PDT m=+14.457867689 (delta=-282.906742ms)
	I0422 05:19:33.025450   11560 fix.go:200] guest clock delta is within tolerance: -282.906742ms
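The guest-clock check parses the VM's `date +%s.%N` output and compares it against host time; the -282.9ms delta here is inside tolerance, so no clock adjustment is made. A sketch of the parse and tolerance test; the tolerance constant is an assumption, since the real threshold is not shown in this log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output such as
// "1713788372.691988258" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	secs, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secs, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	for len(frac) < 9 {
		frac += "0" // pad the fractional part to nanoseconds
	}
	nsec, err := strconv.ParseInt(frac[:9], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1713788372.691988258")
	host := guest.Add(283 * time.Millisecond) // roughly the delta logged above
	delta := guest.Sub(host)
	const tolerance = time.Second // assumed, not minikube's documented value
	fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() <= tolerance)
}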
	I0422 05:19:33.025455   11560 start.go:83] releasing machines lock for "default-k8s-diff-port-654000", held for 13.949874501s
	I0422 05:19:33.025475   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:33.025605   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetIP
	I0422 05:19:33.025698   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:33.025991   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:33.026098   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .DriverName
	I0422 05:19:33.026171   11560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 05:19:33.026207   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:33.026219   11560 ssh_runner.go:195] Run: cat /version.json
	I0422 05:19:33.026232   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHHostname
	I0422 05:19:33.026323   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:33.026360   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHPort
	I0422 05:19:33.026441   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:33.026445   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHKeyPath
	I0422 05:19:33.026551   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:33.026565   11560 main.go:141] libmachine: (default-k8s-diff-port-654000) Calling .GetSSHUsername
	I0422 05:19:33.026632   11560 sshutil.go:53] new ssh client: &{IP:192.169.0.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/id_rsa Username:docker}
	I0422 05:19:33.026665   11560 sshutil.go:53] new ssh client: &{IP:192.169.0.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/default-k8s-diff-port-654000/id_rsa Username:docker}
	I0422 05:19:33.104808   11560 ssh_runner.go:195] Run: systemctl --version
	I0422 05:19:33.109579   11560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 05:19:33.113877   11560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 05:19:33.113929   11560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 05:19:33.127276   11560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 05:19:33.127290   11560 start.go:494] detecting cgroup driver to use...
	I0422 05:19:33.127394   11560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 05:19:33.142998   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0422 05:19:33.152467   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 05:19:33.160816   11560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 05:19:33.160862   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 05:19:33.169590   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 05:19:33.177991   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 05:19:33.188048   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 05:19:33.196654   11560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 05:19:33.205733   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 05:19:33.214510   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 05:19:33.223378   11560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 05:19:33.232357   11560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 05:19:33.240393   11560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 05:19:33.248344   11560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 05:19:33.348818   11560 ssh_runner.go:195] Run: sudo systemctl restart containerd
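The sed passes above rewrite /etc/containerd/config.toml to use the cgroupfs driver before containerd is restarted; the key edit is flipping SystemdCgroup to false. The same rewrite expressed in Go, on a representative config fragment:

package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs forces containerd's runc shim onto the cgroupfs driver,
// mirroring the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|g'` call above.
func setCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "  SystemdCgroup = true\n"
	fmt.Print(setCgroupfs(in)) // "  SystemdCgroup = false"
}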
	I0422 05:19:33.367963   11560 start.go:494] detecting cgroup driver to use...
	I0422 05:19:33.368037   11560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 05:19:33.392269   11560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 05:19:33.406841   11560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 05:19:33.426380   11560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 05:19:33.437554   11560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 05:19:33.448645   11560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0422 05:19:33.468155   11560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 05:19:33.479066   11560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 05:19:33.494102   11560 ssh_runner.go:195] Run: which cri-dockerd
	I0422 05:19:33.497201   11560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 05:19:33.504433   11560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 05:19:33.517855   11560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 05:19:33.625618   11560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 05:19:33.727436   11560 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 05:19:33.727553   11560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
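The 130-byte daemon.json pushed here is what switches dockerd itself to the cgroupfs driver; its content is not echoed in the log, so the following is only a representative example built from Docker's documented exec-opts key, not the bytes minikube actually wrote:

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}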
	I0422 05:19:33.743990   11560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 05:19:33.846037   11560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 05:20:34.882938   11560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.037019937s)
	I0422 05:20:34.883006   11560 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0422 05:20:34.917896   11560 out.go:177] 
	W0422 05:20:34.939713   11560 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 22 12:19:31 default-k8s-diff-port-654000 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:31.373069935Z" level=info msg="Starting up"
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:31.373698857Z" level=info msg="containerd not running, starting managed containerd"
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:31.374293540Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.391854291Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.405714600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.405776727Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.405836708Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.405871814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.405948318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.405991117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.406134648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.406174152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.406204466Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.406232957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.406314230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.406519060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.408706897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.408765188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.408900217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.408943330Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.409032377Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.409095609Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.409128947Z" level=info msg="metadata content store policy set" policy=shared
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.411632830Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.411717805Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.411765068Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.411803712Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.411836672Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.411925441Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412134448Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412264549Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412309424Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412340994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412372702Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412404545Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412434767Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412465037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412496315Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412532816Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412565022Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412594750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412657603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412693700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412726558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412757560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412787766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412817903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412847734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412878237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412911784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412947032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.412977960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413008656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413038913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413072438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413107845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413138960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413171630Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413245627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413288317Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413322364Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413351266Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413427201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413461814Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413491317Z" level=info msg="NRI interface is disabled by configuration."
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413702134Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413764878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413822083Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 22 12:19:31 default-k8s-diff-port-654000 dockerd[519]: time="2024-04-22T12:19:31.413857512Z" level=info msg="containerd successfully booted in 0.023617s"
	Apr 22 12:19:32 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:32.396338823Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 22 12:19:32 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:32.405950609Z" level=info msg="Loading containers: start."
	Apr 22 12:19:32 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:32.516554427Z" level=info msg="Loading containers: done."
	Apr 22 12:19:32 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:32.527816887Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 22 12:19:32 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:32.527929873Z" level=info msg="Daemon has completed initialization"
	Apr 22 12:19:32 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:32.553596621Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 22 12:19:32 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:32.553781649Z" level=info msg="API listen on [::]:2376"
	Apr 22 12:19:32 default-k8s-diff-port-654000 systemd[1]: Started Docker Application Container Engine.
	Apr 22 12:19:33 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:33.524753183Z" level=info msg="Processing signal 'terminated'"
	Apr 22 12:19:33 default-k8s-diff-port-654000 systemd[1]: Stopping Docker Application Container Engine...
	Apr 22 12:19:33 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:33.525711445Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 22 12:19:33 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:33.526082853Z" level=info msg="Daemon shutdown complete"
	Apr 22 12:19:33 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:33.526144811Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 22 12:19:33 default-k8s-diff-port-654000 dockerd[513]: time="2024-04-22T12:19:33.526212471Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 22 12:19:34 default-k8s-diff-port-654000 systemd[1]: docker.service: Deactivated successfully.
	Apr 22 12:19:34 default-k8s-diff-port-654000 systemd[1]: Stopped Docker Application Container Engine.
	Apr 22 12:19:34 default-k8s-diff-port-654000 systemd[1]: Starting Docker Application Container Engine...
	Apr 22 12:19:34 default-k8s-diff-port-654000 dockerd[860]: time="2024-04-22T12:19:34.578122423Z" level=info msg="Starting up"
	Apr 22 12:20:35 default-k8s-diff-port-654000 dockerd[860]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 22 12:20:35 default-k8s-diff-port-654000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 22 12:20:35 default-k8s-diff-port-654000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 22 12:20:35 default-k8s-diff-port-654000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	[journalctl output identical to the log shown above]
	
	-- /stdout --
	W0422 05:20:34.939792   11560 out.go:239] * 
	W0422 05:20:34.940414   11560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 05:20:35.003578   11560 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-654000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000: exit status 6 (164.954491ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 05:20:35.234206   11619 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-654000" does not appear in /Users/jenkins/minikube-integration/18711-1033/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-654000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-654000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-654000 create -f testdata/busybox.yaml: exit status 1 (37.394199ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-654000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-654000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000: exit status 6 (149.726504ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 05:20:35.423663   11625 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-654000" does not appear in /Users/jenkins/minikube-integration/18711-1033/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-654000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000: exit status 6 (149.050698ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 05:20:35.572943   11630 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-654000" does not appear in /Users/jenkins/minikube-integration/18711-1033/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-654000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (59.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-654000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0422 05:20:46.951350    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
E0422 05:20:56.210917    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-654000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (59.754002074s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-654000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-654000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-654000 describe deploy/metrics-server -n kube-system: exit status 1 (37.779056ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-654000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-654000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000: exit status 6 (156.350253ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 05:21:35.521672   11675 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-654000" does not appear in /Users/jenkins/minikube-integration/18711-1033/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-654000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (59.95s)
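
The three default-k8s-diff-port failures above share one root cause: dockerd timed out dialing containerd ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), so docker.service never came back up after the restart, and every later step found a dead Docker daemon and a kubeconfig still pointing at a stale VM. A minimal triage sketch, using only commands the output above already suggests (run inside the VM, e.g. via minikube ssh -p default-k8s-diff-port-654000 where applicable; an illustration, not a verified fix):

	# Why docker.service failed, per the systemd hint above
	systemctl status docker.service
	journalctl -xeu docker.service

	# The containerd socket dockerd timed out dialing
	ls -l /run/containerd/containerd.sock

	# Collect logs for a GitHub issue, as the advice box recommends
	minikube logs --file=logs.txt

	# Repoint kubectl if the context is stale
	minikube update-context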

                                                
                                    

Test pass (303/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.78
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.32
9 TestDownloadOnly/v1.20.0/DeleteAll 0.4
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 10.57
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.32
18 TestDownloadOnly/v1.30.0/DeleteAll 0.4
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.38
21 TestBinaryMirror 1
22 TestOffline 96.44
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.18
27 TestAddons/Setup 143.5
29 TestAddons/parallel/Registry 13.65
30 TestAddons/parallel/Ingress 20.98
31 TestAddons/parallel/InspektorGadget 10.5
32 TestAddons/parallel/MetricsServer 5.48
33 TestAddons/parallel/HelmTiller 14.44
35 TestAddons/parallel/CSI 73.54
36 TestAddons/parallel/Headlamp 15.17
37 TestAddons/parallel/CloudSpanner 5.41
38 TestAddons/parallel/LocalPath 56.45
39 TestAddons/parallel/NvidiaDevicePlugin 5.34
40 TestAddons/parallel/Yakd 5
43 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/StoppedEnableDisable 5.96
45 TestCertOptions 41.51
46 TestCertExpiration 365.27
47 TestDockerFlags 42.9
48 TestForceSystemdFlag 42.72
49 TestForceSystemdEnv 43.21
52 TestHyperKitDriverInstallOrUpdate 8.64
55 TestErrorSpam/setup 38.11
56 TestErrorSpam/start 1.65
57 TestErrorSpam/status 0.54
58 TestErrorSpam/pause 1.38
59 TestErrorSpam/unpause 1.42
60 TestErrorSpam/stop 153.89
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 84.47
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.79
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
72 TestFunctional/serial/CacheCmd/cache/add_local 1.54
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.09
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.15
77 TestFunctional/serial/CacheCmd/cache/delete 0.18
78 TestFunctional/serial/MinikubeKubectlCmd 0.99
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.5
80 TestFunctional/serial/ExtraConfig 43.38
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 2.86
83 TestFunctional/serial/LogsFileCmd 2.85
84 TestFunctional/serial/InvalidService 4.72
86 TestFunctional/parallel/ConfigCmd 0.62
87 TestFunctional/parallel/DashboardCmd 9.76
88 TestFunctional/parallel/DryRun 1.15
89 TestFunctional/parallel/InternationalLanguage 0.51
90 TestFunctional/parallel/StatusCmd 0.55
94 TestFunctional/parallel/ServiceCmdConnect 8.6
95 TestFunctional/parallel/AddonsCmd 0.3
96 TestFunctional/parallel/PersistentVolumeClaim 30.23
98 TestFunctional/parallel/SSHCmd 0.32
99 TestFunctional/parallel/CpCmd 1.3
100 TestFunctional/parallel/MySQL 26.29
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.35
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
110 TestFunctional/parallel/License 0.57
111 TestFunctional/parallel/Version/short 0.11
112 TestFunctional/parallel/Version/components 0.41
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
117 TestFunctional/parallel/ImageCommands/ImageBuild 2.31
118 TestFunctional/parallel/ImageCommands/Setup 2.44
119 TestFunctional/parallel/DockerEnv/bash 0.94
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.29
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.54
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.74
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.42
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.45
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.34
130 TestFunctional/parallel/ServiceCmd/DeployApp 12.15
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.17
136 TestFunctional/parallel/ServiceCmd/List 0.42
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
139 TestFunctional/parallel/ServiceCmd/Format 0.29
140 TestFunctional/parallel/ServiceCmd/URL 0.28
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
148 TestFunctional/parallel/ProfileCmd/profile_list 0.31
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
150 TestFunctional/parallel/MountCmd/any-port 6.24
151 TestFunctional/parallel/MountCmd/specific-port 1.78
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
153 TestFunctional/delete_addon-resizer_images 0.14
154 TestFunctional/delete_my-image_image 0.06
155 TestFunctional/delete_minikube_cached_images 0.05
159 TestMultiControlPlane/serial/StartCluster 439.23
160 TestMultiControlPlane/serial/DeployApp 5.45
161 TestMultiControlPlane/serial/PingHostFromPods 1.42
162 TestMultiControlPlane/serial/AddWorkerNode 43.32
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 227.96
171 TestImageBuild/serial/Setup 40
172 TestImageBuild/serial/NormalBuild 1.23
173 TestImageBuild/serial/BuildWithBuildArg 0.52
174 TestImageBuild/serial/BuildWithDockerIgnore 0.26
175 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.25
179 TestJSONOutput/start/Command 83.12
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.47
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.44
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 8.32
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.77
207 TestMainNoArgs 0.09
208 TestMinikubeProfile 91.84
211 TestMountStart/serial/StartWithMountFirst 21.65
212 TestMountStart/serial/VerifyMountFirst 0.31
213 TestMountStart/serial/StartWithMountSecond 18.12
214 TestMountStart/serial/VerifyMountSecond 0.31
215 TestMountStart/serial/DeleteFirst 2.39
216 TestMountStart/serial/VerifyMountPostDelete 0.31
217 TestMountStart/serial/Stop 2.39
218 TestMountStart/serial/RestartStopped 20.3
219 TestMountStart/serial/VerifyMountPostStop 0.32
222 TestMultiNode/serial/FreshStart2Nodes 210.59
223 TestMultiNode/serial/DeployApp2Nodes 4.87
224 TestMultiNode/serial/PingHostFrom2Pods 0.92
225 TestMultiNode/serial/AddNode 35.25
226 TestMultiNode/serial/MultiNodeLabels 0.05
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 5.49
229 TestMultiNode/serial/StopNode 2.86
230 TestMultiNode/serial/StartAfterStop 31.59
231 TestMultiNode/serial/RestartKeepsNodes 259.6
232 TestMultiNode/serial/DeleteNode 3.49
233 TestMultiNode/serial/StopMultiNode 16.82
235 TestMultiNode/serial/ValidateNameConflict 45.73
241 TestScheduledStopUnix 109.3
242 TestSkaffold 117.14
245 TestRunningBinaryUpgrade 99.59
247 TestKubernetesUpgrade 121.65
260 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.04
261 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.56
262 TestStoppedBinaryUpgrade/Setup 1.18
263 TestStoppedBinaryUpgrade/Upgrade 87.65
265 TestPause/serial/Start 91.39
266 TestStoppedBinaryUpgrade/MinikubeLogs 2.93
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.51
276 TestNoKubernetes/serial/StartWithK8s 39.43
277 TestNoKubernetes/serial/StartWithStopK8s 17.5
278 TestNoKubernetes/serial/Start 20.91
279 TestPause/serial/SecondStartNoReconfiguration 45.83
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.14
281 TestNoKubernetes/serial/ProfileList 17.96
282 TestNoKubernetes/serial/Stop 2.45
283 TestNoKubernetes/serial/StartNoArgs 19.33
284 TestPause/serial/Pause 0.56
285 TestPause/serial/VerifyStatus 0.17
286 TestPause/serial/Unpause 0.51
287 TestPause/serial/PauseAgain 0.6
288 TestPause/serial/DeletePaused 5.81
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
290 TestNetworkPlugins/group/auto/Start 181
291 TestPause/serial/VerifyDeletedResources 0.21
292 TestNetworkPlugins/group/calico/Start 88.81
293 TestNetworkPlugins/group/calico/ControllerPod 6.01
294 TestNetworkPlugins/group/calico/KubeletFlags 0.16
295 TestNetworkPlugins/group/calico/NetCatPod 11.2
296 TestNetworkPlugins/group/calico/DNS 0.12
297 TestNetworkPlugins/group/calico/Localhost 0.1
298 TestNetworkPlugins/group/calico/HairPin 0.1
299 TestNetworkPlugins/group/custom-flannel/Start 178.84
300 TestNetworkPlugins/group/auto/KubeletFlags 0.16
301 TestNetworkPlugins/group/auto/NetCatPod 10.14
302 TestNetworkPlugins/group/auto/DNS 0.12
303 TestNetworkPlugins/group/auto/Localhost 0.11
304 TestNetworkPlugins/group/auto/HairPin 0.1
305 TestNetworkPlugins/group/false/Start 80.86
306 TestNetworkPlugins/group/false/KubeletFlags 0.16
307 TestNetworkPlugins/group/false/NetCatPod 10.14
308 TestNetworkPlugins/group/false/DNS 0.13
309 TestNetworkPlugins/group/false/Localhost 0.1
310 TestNetworkPlugins/group/false/HairPin 0.1
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.15
313 TestNetworkPlugins/group/custom-flannel/DNS 0.14
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
316 TestNetworkPlugins/group/kindnet/Start 63.02
317 TestNetworkPlugins/group/flannel/Start 61.7
318 TestNetworkPlugins/group/kindnet/ControllerPod 6
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
320 TestNetworkPlugins/group/kindnet/NetCatPod 12.14
321 TestNetworkPlugins/group/flannel/ControllerPod 6
322 TestNetworkPlugins/group/kindnet/DNS 0.13
323 TestNetworkPlugins/group/kindnet/Localhost 0.09
324 TestNetworkPlugins/group/kindnet/HairPin 0.1
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
326 TestNetworkPlugins/group/flannel/NetCatPod 11.14
327 TestNetworkPlugins/group/flannel/DNS 0.13
328 TestNetworkPlugins/group/flannel/Localhost 0.1
329 TestNetworkPlugins/group/flannel/HairPin 0.1
330 TestNetworkPlugins/group/enable-default-cni/Start 55.24
331 TestNetworkPlugins/group/bridge/Start 56.63
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
338 TestNetworkPlugins/group/bridge/NetCatPod 12.14
339 TestNetworkPlugins/group/bridge/DNS 0.13
340 TestNetworkPlugins/group/bridge/Localhost 0.1
341 TestNetworkPlugins/group/bridge/HairPin 0.1
342 TestNetworkPlugins/group/kubenet/Start 84.18
344 TestStartStop/group/old-k8s-version/serial/FirstStart 166.42
345 TestNetworkPlugins/group/kubenet/KubeletFlags 0.16
346 TestNetworkPlugins/group/kubenet/NetCatPod 11.15
347 TestNetworkPlugins/group/kubenet/DNS 0.13
348 TestNetworkPlugins/group/kubenet/Localhost 0.11
349 TestNetworkPlugins/group/kubenet/HairPin 0.1
351 TestStartStop/group/no-preload/serial/FirstStart 54.11
352 TestStartStop/group/no-preload/serial/DeployApp 8.2
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
354 TestStartStop/group/no-preload/serial/Stop 8.42
355 TestStartStop/group/old-k8s-version/serial/DeployApp 9.33
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
357 TestStartStop/group/no-preload/serial/SecondStart 293.47
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
359 TestStartStop/group/old-k8s-version/serial/Stop 8.41
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
361 TestStartStop/group/old-k8s-version/serial/SecondStart 392.94
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
364 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
365 TestStartStop/group/no-preload/serial/Pause 1.94
367 TestStartStop/group/embed-certs/serial/FirstStart 54.73
368 TestStartStop/group/embed-certs/serial/DeployApp 9.21
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
370 TestStartStop/group/embed-certs/serial/Stop 8.43
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
372 TestStartStop/group/embed-certs/serial/SecondStart 292.11
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
376 TestStartStop/group/old-k8s-version/serial/Pause 1.96
381 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.42
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.33
383 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.67
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.17
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.97
389 TestStartStop/group/newest-cni/serial/FirstStart 52.14
390 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
392 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.17
393 TestStartStop/group/embed-certs/serial/Pause 2.14
394 TestStartStop/group/newest-cni/serial/DeployApp 0
395 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
396 TestStartStop/group/newest-cni/serial/Stop 8.42
397 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
398 TestStartStop/group/newest-cni/serial/SecondStart 52.97
399 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
401 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.17
402 TestStartStop/group/newest-cni/serial/Pause 1.83
TestDownloadOnly/v1.20.0/json-events (16.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-040000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-040000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (16.779604059s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.78s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-040000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-040000: exit status 85 (319.340067ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-040000 | jenkins | v1.33.0 | 22 Apr 24 03:36 PDT |          |
	|         | -p download-only-040000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 03:36:58
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 03:36:58.205877    1486 out.go:291] Setting OutFile to fd 1 ...
	I0422 03:36:58.206063    1486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:36:58.206068    1486 out.go:304] Setting ErrFile to fd 2...
	I0422 03:36:58.206072    1486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:36:58.206238    1486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	W0422 03:36:58.206324    1486 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18711-1033/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18711-1033/.minikube/config/config.json: no such file or directory
	I0422 03:36:58.208116    1486 out.go:298] Setting JSON to true
	I0422 03:36:58.232462    1486 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":389,"bootTime":1713781829,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 03:36:58.232578    1486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 03:36:58.254720    1486 out.go:97] [download-only-040000] minikube v1.33.0 on Darwin 14.4.1
	I0422 03:36:58.277496    1486 out.go:169] MINIKUBE_LOCATION=18711
	W0422 03:36:58.254905    1486 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball: no such file or directory
	I0422 03:36:58.254884    1486 notify.go:220] Checking for updates...
	I0422 03:36:58.326298    1486 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 03:36:58.347437    1486 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 03:36:58.369284    1486 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 03:36:58.410946    1486 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	W0422 03:36:58.453353    1486 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 03:36:58.453854    1486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 03:36:58.528326    1486 out.go:97] Using the hyperkit driver based on user configuration
	I0422 03:36:58.528391    1486 start.go:297] selected driver: hyperkit
	I0422 03:36:58.528405    1486 start.go:901] validating driver "hyperkit" against <nil>
	I0422 03:36:58.528629    1486 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 03:36:58.529048    1486 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18711-1033/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0422 03:36:58.756683    1486 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0422 03:36:58.761064    1486 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 03:36:58.761084    1486 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0422 03:36:58.761114    1486 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 03:36:58.765345    1486 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0422 03:36:58.765515    1486 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 03:36:58.765568    1486 cni.go:84] Creating CNI manager for ""
	I0422 03:36:58.765583    1486 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0422 03:36:58.765646    1486 start.go:340] cluster config:
	{Name:download-only-040000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 03:36:58.765863    1486 iso.go:125] acquiring lock: {Name:mk174d786084574fba345b763762a2b8adb514c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 03:36:58.787242    1486 out.go:97] Downloading VM boot image ...
	I0422 03:36:58.787345    1486 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 03:37:03.771532    1486 out.go:97] Starting "download-only-040000" primary control-plane node in "download-only-040000" cluster
	I0422 03:37:03.771568    1486 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0422 03:37:03.866282    1486 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0422 03:37:03.866314    1486 cache.go:56] Caching tarball of preloaded images
	I0422 03:37:03.866679    1486 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0422 03:37:03.888323    1486 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0422 03:37:03.888340    1486 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0422 03:37:03.962858    1486 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0422 03:37:09.233190    1486 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0422 03:37:09.233405    1486 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0422 03:37:09.782439    1486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0422 03:37:09.782665    1486 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/download-only-040000/config.json ...
	I0422 03:37:09.782688    1486 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/download-only-040000/config.json: {Name:mkb38a87518413b64de798ee88b6a3ae2fea6202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 03:37:09.782990    1486 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0422 03:37:09.783287    1486 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-040000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-040000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.32s)

TestDownloadOnly/v1.20.0/DeleteAll (0.4s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.40s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-040000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.30.0/json-events (10.57s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-401000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-401000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit : (10.569783727s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (10.57s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-401000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-401000: exit status 85 (319.677117ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-040000 | jenkins | v1.33.0 | 22 Apr 24 03:36 PDT |                     |
	|         | -p download-only-040000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 22 Apr 24 03:37 PDT | 22 Apr 24 03:37 PDT |
	| delete  | -p download-only-040000        | download-only-040000 | jenkins | v1.33.0 | 22 Apr 24 03:37 PDT | 22 Apr 24 03:37 PDT |
	| start   | -o=json --download-only        | download-only-401000 | jenkins | v1.33.0 | 22 Apr 24 03:37 PDT |                     |
	|         | -p download-only-401000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 03:37:16
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 03:37:16.077061    1527 out.go:291] Setting OutFile to fd 1 ...
	I0422 03:37:16.077307    1527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:37:16.077312    1527 out.go:304] Setting ErrFile to fd 2...
	I0422 03:37:16.077316    1527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:37:16.077489    1527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 03:37:16.079006    1527 out.go:298] Setting JSON to true
	I0422 03:37:16.102543    1527 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":407,"bootTime":1713781829,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 03:37:16.102625    1527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 03:37:16.123488    1527 out.go:97] [download-only-401000] minikube v1.33.0 on Darwin 14.4.1
	I0422 03:37:16.145455    1527 out.go:169] MINIKUBE_LOCATION=18711
	I0422 03:37:16.123686    1527 notify.go:220] Checking for updates...
	I0422 03:37:16.166715    1527 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 03:37:16.188932    1527 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 03:37:16.211428    1527 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 03:37:16.232664    1527 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	W0422 03:37:16.275575    1527 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 03:37:16.276073    1527 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 03:37:16.306488    1527 out.go:97] Using the hyperkit driver based on user configuration
	I0422 03:37:16.306572    1527 start.go:297] selected driver: hyperkit
	I0422 03:37:16.306585    1527 start.go:901] validating driver "hyperkit" against <nil>
	I0422 03:37:16.306782    1527 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 03:37:16.306977    1527 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18711-1033/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0422 03:37:16.316728    1527 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0422 03:37:16.320498    1527 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 03:37:16.320517    1527 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0422 03:37:16.320548    1527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 03:37:16.323143    1527 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0422 03:37:16.323283    1527 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 03:37:16.323339    1527 cni.go:84] Creating CNI manager for ""
	I0422 03:37:16.323355    1527 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 03:37:16.323364    1527 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 03:37:16.323438    1527 start.go:340] cluster config:
	{Name:download-only-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 03:37:16.323546    1527 iso.go:125] acquiring lock: {Name:mk174d786084574fba345b763762a2b8adb514c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 03:37:16.344932    1527 out.go:97] Starting "download-only-401000" primary control-plane node in "download-only-401000" cluster
	I0422 03:37:16.344965    1527 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 03:37:16.425967    1527 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0422 03:37:16.425999    1527 cache.go:56] Caching tarball of preloaded images
	I0422 03:37:16.426372    1527 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 03:37:16.447479    1527 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0422 03:37:16.447509    1527 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0422 03:37:16.526622    1527 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0422 03:37:20.714075    1527 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0422 03:37:20.714271    1527 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0422 03:37:21.205322    1527 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 03:37:21.205573    1527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/download-only-401000/config.json ...
	I0422 03:37:21.205597    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/download-only-401000/config.json: {Name:mkd6c7aa5073528ddde897763d5bba4f61c1ef2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 03:37:21.205959    1527 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 03:37:21.206179    1527 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18711-1033/.minikube/cache/darwin/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-401000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-401000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.32s)

TestDownloadOnly/v1.30.0/DeleteAll (0.4s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.40s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-401000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.38s)

TestBinaryMirror (1s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-393000 --alsologtostderr --binary-mirror http://127.0.0.1:49342 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-393000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-393000
--- PASS: TestBinaryMirror (1.00s)

TestOffline (96.44s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-630000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-630000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (1m31.166131504s)
helpers_test.go:175: Cleaning up "offline-docker-630000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-630000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-630000: (5.269226859s)
--- PASS: TestOffline (96.44s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-483000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-483000: exit status 85 (197.410461ms)

-- stdout --
	* Profile "addons-483000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-483000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-483000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-483000: exit status 85 (176.434802ms)

-- stdout --
	* Profile "addons-483000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-483000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

TestAddons/Setup (143.5s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-483000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-483000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.498791724s)
--- PASS: TestAddons/Setup (143.50s)

TestAddons/parallel/Registry (13.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 10.676896ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-q979b" [1b0c1b65-bcbc-4cdd-aed9-5409600b7f49] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005304409s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fcnll" [77231cac-e101-4259-9d8c-841b74f55807] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003881476s
addons_test.go:340: (dbg) Run:  kubectl --context addons-483000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-483000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-483000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.965375784s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 ip
addons_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.65s)

TestAddons/parallel/Ingress (20.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-483000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-483000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-483000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [050c6fa4-87bc-42a1-9ac6-37fb997b367e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [050c6fa4-87bc-42a1-9ac6-37fb997b367e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003654193s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-483000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.169.0.3
addons_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-darwin-amd64 -p addons-483000 addons disable ingress-dns --alsologtostderr -v=1: (1.484421872s)
addons_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p addons-483000 addons disable ingress --alsologtostderr -v=1: (7.559248888s)
--- PASS: TestAddons/parallel/Ingress (20.98s)

TestAddons/parallel/InspektorGadget (10.5s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-szsss" [f63a218b-f5ad-4372-adcd-d608c05fc1a3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005008133s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-483000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-483000: (5.499259716s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)

TestAddons/parallel/MetricsServer (5.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 1.955622ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-pgtdb" [5a1a0e1b-69af-42b8-91fe-e87c5ca34ed2] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004240533s
addons_test.go:415: (dbg) Run:  kubectl --context addons-483000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.48s)

TestAddons/parallel/HelmTiller (14.44s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 1.546842ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-scjx4" [e9622e8b-5271-46ff-bbbe-d7afb7f6d538] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003749845s
addons_test.go:473: (dbg) Run:  kubectl --context addons-483000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-483000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.985415311s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.44s)

TestAddons/parallel/CSI (73.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 12.151925ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-483000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/22 03:40:06 [DEBUG] GET http://192.169.0.3:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-483000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b05408da-1fd3-4c80-83ba-77aa08985a99] Pending
helpers_test.go:344: "task-pv-pod" [b05408da-1fd3-4c80-83ba-77aa08985a99] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b05408da-1fd3-4c80-83ba-77aa08985a99] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003437557s
addons_test.go:584: (dbg) Run:  kubectl --context addons-483000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-483000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-483000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-483000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-483000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-483000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-483000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [99e08c09-6864-471e-b11d-f6f5dbe46ab3] Pending
helpers_test.go:344: "task-pv-pod-restore" [99e08c09-6864-471e-b11d-f6f5dbe46ab3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [99e08c09-6864-471e-b11d-f6f5dbe46ab3] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00386981s
addons_test.go:626: (dbg) Run:  kubectl --context addons-483000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-483000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-483000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-483000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.482795369s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (73.54s)

TestAddons/parallel/Headlamp (15.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-483000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-483000 --alsologtostderr -v=1: (1.160860807s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-59c7g" [4b552246-21eb-4fd2-8207-08c627bb4660] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-59c7g" [4b552246-21eb-4fd2-8207-08c627bb4660] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.005071047s
--- PASS: TestAddons/parallel/Headlamp (15.17s)

TestAddons/parallel/CloudSpanner (5.41s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-5c6xm" [ff71692f-159b-4bb5-94e3-2a7bcf26bf1e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003780431s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-483000
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

TestAddons/parallel/LocalPath (56.45s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-483000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-483000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [04622ffb-e6db-4915-b75b-2a241a43c60e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [04622ffb-e6db-4915-b75b-2a241a43c60e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [04622ffb-e6db-4915-b75b-2a241a43c60e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.004310383s
addons_test.go:891: (dbg) Run:  kubectl --context addons-483000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 ssh "cat /opt/local-path-provisioner/pvc-3a093e7e-2764-416b-86c7-bb74506cb8e4_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-483000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-483000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-483000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-483000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.791985018s)
--- PASS: TestAddons/parallel/LocalPath (56.45s)

TestAddons/parallel/NvidiaDevicePlugin (5.34s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vc8nh" [2e4f8d51-d35e-4dc6-b40e-ee0ddf69b594] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004582325s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-483000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.34s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-fpflx" [2e6873b2-3ac6-4203-a47b-6d041094b725] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004101938s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-483000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-483000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.96s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-483000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-483000: (5.394992589s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-483000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-483000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-483000
--- PASS: TestAddons/StoppedEnableDisable (5.96s)

TestCertOptions (41.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-500000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-500000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (37.706088754s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-500000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-500000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-500000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-500000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-500000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-500000: (3.443715539s)
--- PASS: TestCertOptions (41.51s)

TestCertExpiration (365.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-374000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-374000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (2m34.358289787s)
E0422 04:54:53.372643    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-374000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-374000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (25.620866425s)
helpers_test.go:175: Cleaning up "cert-expiration-374000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-374000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-374000: (5.293229371s)
--- PASS: TestCertExpiration (365.27s)
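
Note: reading the timings, the first start mints certificates with a 3-minute TTL (2m34s), the test then appears to wait out that TTL, and the second start (25s) succeeds because minikube regenerates the lapsed certificates once --cert-expiration is raised to 8760h. A sketch of the same sequence, with the wait made explicit (an assumption about the test's internals):

	out/minikube-darwin-amd64 start -p cert-expiration-374000 --memory=2048 --cert-expiration=3m --driver=hyperkit
	sleep 180   # let the 3m certificates expire
	out/minikube-darwin-amd64 start -p cert-expiration-374000 --memory=2048 --cert-expiration=8760h --driver=hyperkit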

TestDockerFlags (42.90s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-152000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-152000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (37.272801176s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-152000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-152000: (5.279950333s)
--- PASS: TestDockerFlags (42.90s)
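
Note: the two systemctl probes are the real assertions here: --docker-env values should surface in the docker unit's Environment property and --docker-opt values in its ExecStart line. The same checks by hand, assuming the docker-flags-152000 profile is still running:

	out/minikube-darwin-amd64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expect FOO=BAR and BAZ=BAT
	out/minikube-darwin-amd64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expect the debug and icc=true options among the dockerd arguments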

TestForceSystemdFlag (42.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-054000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-054000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (37.244459644s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-054000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-054000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-054000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-054000: (5.285560855s)
--- PASS: TestForceSystemdFlag (42.72s)
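
Note: the single assertion behind this test is the runtime's cgroup driver. The same check by hand, assuming the profile is still up:

	out/minikube-darwin-amd64 -p force-systemd-flag-054000 ssh "docker info --format {{.CgroupDriver}}"
	# with --force-systemd this should print systemd rather than cgroupfs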

TestForceSystemdEnv (43.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-331000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0422 04:51:16.429516    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-331000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (37.748607387s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-331000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-331000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-331000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-331000: (5.283975265s)
--- PASS: TestForceSystemdEnv (43.21s)

TestHyperKitDriverInstallOrUpdate (8.64s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.64s)

TestErrorSpam/setup (38.11s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-684000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-684000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 --driver=hyperkit : (38.109597643s)
--- PASS: TestErrorSpam/setup (38.11s)

TestErrorSpam/start (1.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 start --dry-run
--- PASS: TestErrorSpam/start (1.65s)

TestErrorSpam/status (0.54s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 status
--- PASS: TestErrorSpam/status (0.54s)

TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 pause
--- PASS: TestErrorSpam/pause (1.38s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (153.89s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 stop: (3.408145952s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 stop: (1m15.241252109s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 stop
E0422 03:44:53.141382    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:53.161864    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:53.172124    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:53.192654    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:53.232937    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:53.313987    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:53.474190    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:53.795495    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:54.435901    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:55.717275    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:44:58.277616    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:45:03.398101    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:45:13.638564    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-684000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-684000 stop: (1m15.24214516s)
--- PASS: TestErrorSpam/stop (153.89s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18711-1033/.minikube/files/etc/test/nested/copy/1484/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.47s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-984000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0422 03:45:34.119327    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:46:15.062937    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-984000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m24.464580678s)
--- PASS: TestFunctional/serial/StartWithProxy (84.47s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.79s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-984000 --alsologtostderr -v=8
E0422 03:47:36.977917    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-984000 --alsologtostderr -v=8: (38.791352462s)
functional_test.go:659: soft start took 38.791867789s for "functional-984000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.79s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-984000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 cache add registry.k8s.io/pause:3.1: (1.090512908s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 cache add registry.k8s.io/pause:3.3: (1.008381941s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3792692708/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cache add minikube-local-cache-test:functional-984000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cache delete minikube-local-cache-test:functional-984000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-984000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (164.730693ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)
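
Note: the non-zero crictl exit above is the expected midpoint of this test: it proves the image was really gone from the node before `cache reload` restored it from the host-side cache. The full cycle, condensed from the commands the test runs:

	out/minikube-darwin-amd64 -p functional-984000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-984000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed
	out/minikube-darwin-amd64 -p functional-984000 cache reload
	out/minikube-darwin-amd64 -p functional-984000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds: image restored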

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.99s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 kubectl -- --context functional-984000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.99s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.50s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-984000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-984000 get pods: (1.496627305s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.50s)

TestFunctional/serial/ExtraConfig (43.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-984000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-984000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.384245448s)
functional_test.go:757: restart took 43.384386038s for "functional-984000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.38s)
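
Note: --extra-config=apiserver.<flag>=<value> is plumbed through to the kube-apiserver static pod on restart. One way to confirm the flag landed (a verification sketch, not a step the test performs):

	kubectl --context functional-984000 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins
	# expect NamespaceAutoProvision in the flag value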

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-984000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.86s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 logs: (2.86085934s)
--- PASS: TestFunctional/serial/LogsCmd (2.86s)

TestFunctional/serial/LogsFileCmd (2.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2094126875/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2094126875/001/logs.txt: (2.847852073s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.85s)

TestFunctional/serial/InvalidService (4.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-984000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-984000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-984000: exit status 115 (311.154524ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.5:32550 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-984000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-984000 delete -f testdata/invalidsvc.yaml: (1.263322138s)
--- PASS: TestFunctional/serial/InvalidService (4.72s)

TestFunctional/parallel/ConfigCmd (0.62s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 config get cpus: exit status 14 (87.511648ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 config get cpus: exit status 14 (68.330438ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.62s)

TestFunctional/parallel/DashboardCmd (9.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-984000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-984000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3095: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.76s)

TestFunctional/parallel/DryRun (1.15s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-984000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-984000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (675.22067ms)

-- stdout --
	* [functional-984000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0422 03:49:41.536195    3064 out.go:291] Setting OutFile to fd 1 ...
	I0422 03:49:41.536419    3064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:49:41.536424    3064 out.go:304] Setting ErrFile to fd 2...
	I0422 03:49:41.536428    3064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:49:41.536614    3064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 03:49:41.538032    3064 out.go:298] Setting JSON to false
	I0422 03:49:41.562914    3064 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1152,"bootTime":1713781829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 03:49:41.563002    3064 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 03:49:41.584531    3064 out.go:177] * [functional-984000] minikube v1.33.0 on Darwin 14.4.1
	I0422 03:49:41.648355    3064 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 03:49:41.626294    3064 notify.go:220] Checking for updates...
	I0422 03:49:41.690295    3064 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 03:49:41.711411    3064 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 03:49:41.785354    3064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 03:49:41.859355    3064 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	I0422 03:49:41.901355    3064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 03:49:41.939160    3064 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 03:49:41.939893    3064 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 03:49:41.939960    3064 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 03:49:41.950116    3064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50573
	I0422 03:49:41.950595    3064 main.go:141] libmachine: () Calling .GetVersion
	I0422 03:49:41.951205    3064 main.go:141] libmachine: Using API Version  1
	I0422 03:49:41.951218    3064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 03:49:41.951421    3064 main.go:141] libmachine: () Calling .GetMachineName
	I0422 03:49:41.951562    3064 main.go:141] libmachine: (functional-984000) Calling .DriverName
	I0422 03:49:41.951767    3064 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 03:49:41.952023    3064 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 03:49:41.952062    3064 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 03:49:41.961016    3064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50575
	I0422 03:49:41.961416    3064 main.go:141] libmachine: () Calling .GetVersion
	I0422 03:49:41.961730    3064 main.go:141] libmachine: Using API Version  1
	I0422 03:49:41.961747    3064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 03:49:41.961961    3064 main.go:141] libmachine: () Calling .GetMachineName
	I0422 03:49:41.962066    3064 main.go:141] libmachine: (functional-984000) Calling .DriverName
	I0422 03:49:41.990266    3064 out.go:177] * Using the hyperkit driver based on existing profile
	I0422 03:49:42.032399    3064 start.go:297] selected driver: hyperkit
	I0422 03:49:42.032426    3064 start.go:901] validating driver "hyperkit" against &{Name:functional-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 03:49:42.032620    3064 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 03:49:42.058289    3064 out.go:177] 
	W0422 03:49:42.079586    3064 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0422 03:49:42.100585    3064 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-984000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.15s)
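
Note: exit status 23 on the first run is the point of the test: --dry-run still validates the requested resources, and 250MB falls below minikube's 1800MB usable-memory floor (RSRC_INSUFFICIENT_REQ_MEMORY), so the start is rejected without touching the VM. The second run omits --memory and passes. In sketch form:

	out/minikube-darwin-amd64 start -p functional-984000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit   # exit 23
	out/minikube-darwin-amd64 start -p functional-984000 --dry-run --alsologtostderr -v=1 --driver=hyperkit             # exit 0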

TestFunctional/parallel/InternationalLanguage (0.51s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-984000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-984000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (513.730973ms)

-- stdout --
	* [functional-984000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0422 03:49:42.680503    3082 out.go:291] Setting OutFile to fd 1 ...
	I0422 03:49:42.680750    3082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:49:42.680756    3082 out.go:304] Setting ErrFile to fd 2...
	I0422 03:49:42.680759    3082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 03:49:42.680966    3082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 03:49:42.682533    3082 out.go:298] Setting JSON to false
	I0422 03:49:42.706542    3082 start.go:129] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1153,"bootTime":1713781829,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0422 03:49:42.706715    3082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0422 03:49:42.728440    3082 out.go:177] * [functional-984000] minikube v1.33.0 sur Darwin 14.4.1
	I0422 03:49:42.770467    3082 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 03:49:42.770511    3082 notify.go:220] Checking for updates...
	I0422 03:49:42.812279    3082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	I0422 03:49:42.835457    3082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0422 03:49:42.856511    3082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 03:49:42.877327    3082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	I0422 03:49:42.898431    3082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 03:49:42.919891    3082 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 03:49:42.920349    3082 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 03:49:42.920408    3082 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 03:49:42.929330    3082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50583
	I0422 03:49:42.929786    3082 main.go:141] libmachine: () Calling .GetVersion
	I0422 03:49:42.930189    3082 main.go:141] libmachine: Using API Version  1
	I0422 03:49:42.930203    3082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 03:49:42.930414    3082 main.go:141] libmachine: () Calling .GetMachineName
	I0422 03:49:42.930538    3082 main.go:141] libmachine: (functional-984000) Calling .DriverName
	I0422 03:49:42.930743    3082 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 03:49:42.930972    3082 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 03:49:42.930998    3082 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 03:49:42.939434    3082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50585
	I0422 03:49:42.939762    3082 main.go:141] libmachine: () Calling .GetVersion
	I0422 03:49:42.940067    3082 main.go:141] libmachine: Using API Version  1
	I0422 03:49:42.940077    3082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 03:49:42.940297    3082 main.go:141] libmachine: () Calling .GetMachineName
	I0422 03:49:42.940405    3082 main.go:141] libmachine: (functional-984000) Calling .DriverName
	I0422 03:49:42.969328    3082 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0422 03:49:43.011413    3082 start.go:297] selected driver: hyperkit
	I0422 03:49:43.011434    3082 start.go:901] validating driver "hyperkit" against &{Name:functional-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 03:49:43.011575    3082 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 03:49:43.036298    3082 out.go:177] 
	W0422 03:49:43.057565    3082 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0422 03:49:43.078607    3082 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.51s)

TestFunctional/parallel/StatusCmd (0.55s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.55s)

TestFunctional/parallel/ServiceCmdConnect (8.60s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-984000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-984000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-9nzp2" [8730444b-bbdc-42f0-b865-d97ca136dc5c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-9nzp2" [8730444b-bbdc-42f0-b865-d97ca136dc5c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005154021s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.5:32544
functional_test.go:1671: http://192.169.0.5:32544: success! body:

Hostname: hello-node-connect-57b4589c47-9nzp2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.5:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.5:32544
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
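
Note: the response body above is the echoserver reply fetched over the NodePort URL that `service --url` resolved. A manual equivalent; the curl step is an illustrative addition, not a command from the log:

	out/minikube-darwin-amd64 -p functional-984000 service hello-node-connect --url   # prints e.g. http://192.169.0.5:32544
	curl "$(out/minikube-darwin-amd64 -p functional-984000 service hello-node-connect --url)"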

TestFunctional/parallel/AddonsCmd (0.30s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

TestFunctional/parallel/PersistentVolumeClaim (30.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [811c67c3-e198-429c-a64e-022e51b0305b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004513388s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-984000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-984000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-984000 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-984000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-984000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4d45faa8-a98a-456d-ad50-2bc6764379d3] Pending
helpers_test.go:344: "sp-pod" [4d45faa8-a98a-456d-ad50-2bc6764379d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4d45faa8-a98a-456d-ad50-2bc6764379d3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.002739116s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-984000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-984000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-984000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [75427778-b5ba-423c-a12e-6f78c26eac59] Pending
helpers_test.go:344: "sp-pod" [75427778-b5ba-423c-a12e-6f78c26eac59] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [75427778-b5ba-423c-a12e-6f78c26eac59] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002781342s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-984000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.23s)
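
Note: the test proves the claim outlives its consumer: it writes /tmp/mount/foo in the first sp-pod, deletes the pod, recreates it from the same manifest, and then lists /tmp/mount. The claim it applies has roughly this shape (only the name myclaim is taken from the log; access mode and size are illustrative, the authoritative manifest is testdata/storage-provisioner/pvc.yaml):

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi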

TestFunctional/parallel/SSHCmd (0.32s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)
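
Note: `minikube ssh` runs the quoted command inside the VM over the driver's SSH session and exits non-zero when the remote command fails, so it doubles as a cheap reachability probe:

	out/minikube-darwin-amd64 -p functional-984000 ssh "uname -a"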

TestFunctional/parallel/CpCmd (1.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh -n functional-984000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cp functional-984000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd1494038114/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh -n functional-984000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh -n functional-984000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
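
Note: the three cp invocations cover host-to-VM, VM-to-host, and host-to-VM into a directory that does not yet exist (the final sudo cat succeeding shows minikube created the remote parents). The general shape is:

	out/minikube-darwin-amd64 -p <profile> cp <source> [<node>:]<target>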

TestFunctional/parallel/MySQL (26.29s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-984000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-trrrr" [23818dd1-3d4f-4ef4-b24c-221f382fd1b4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-trrrr" [23818dd1-3d4f-4ef4-b24c-221f382fd1b4] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.007247184s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-984000 exec mysql-64454c8b5c-trrrr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-984000 exec mysql-64454c8b5c-trrrr -- mysql -ppassword -e "show databases;": exit status 1 (121.397698ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-984000 exec mysql-64454c8b5c-trrrr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-984000 exec mysql-64454c8b5c-trrrr -- mysql -ppassword -e "show databases;": exit status 1 (152.911302ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-984000 exec mysql-64454c8b5c-trrrr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.29s)
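
Note: the two non-zero exits are an expected race against a MySQL container that is still initializing (ERROR 1045 and ERROR 2002 are both transient states during startup); the test simply retries until `show databases;` succeeds. A rough shell equivalent of that retry loop (the deployment name mysql is inferred from the pod name above):

	until kubectl --context functional-984000 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do sleep 2; done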

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1484/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo cat /etc/test/nested/copy/1484/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
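
Note: FileSync exercises minikube's file sync: files placed under $MINIKUBE_HOME/.minikube/files/<path> on the host are copied into the VM at /<path> when the cluster starts (the 1484 path component matches the test binary's PID elsewhere in this log, keeping paths unique per run). A minimal manual use, assuming the default MINIKUBE_HOME:

	mkdir -p ~/.minikube/files/etc/demo
	echo "hello from the host" > ~/.minikube/files/etc/demo/greeting
	minikube start   # the file should appear in the VM as /etc/demo/greeting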

TestFunctional/parallel/CertSync (1.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1484.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo cat /etc/ssl/certs/1484.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1484.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo cat /usr/share/ca-certificates/1484.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo cat /etc/ssl/certs/14842.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo cat /usr/share/ca-certificates/14842.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)
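
Note: the .0 entries are OpenSSL subject-hash names: TLS clients look up CAs in /etc/ssl/certs by a hash of the certificate subject, so each synced PEM is also reachable under its hash. Assuming the pairing implied above (1484.pem <-> 51391683.0), the hash can be reproduced with:

	openssl x509 -noout -subject_hash -in 1484.pem   # expected: 51391683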

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-984000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
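
Note: the go-template iterates the first node's label map and prints only the keys. An equivalent probe with jsonpath, for comparison:

	kubectl --context functional-984000 get nodes -o jsonpath='{.items[0].metadata.labels}'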

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 ssh "sudo systemctl is-active crio": exit status 1 (226.266237ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)
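
Note: the non-zero exit is the passing case here: `systemctl is-active` prints inactive and exits with status 3 for a unit that is not running, the ssh wrapper surfaces that remote status on stderr, and the test asserts that the non-active runtime (crio) is disabled. The inverse check against the active runtime would be:

	out/minikube-darwin-amd64 -p functional-984000 ssh "sudo systemctl is-active docker"   # expected: active, exit 0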

TestFunctional/parallel/License (0.57s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.41s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-984000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-984000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-984000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-984000 image ls --format short --alsologtostderr:
I0422 03:49:46.192138    3115 out.go:291] Setting OutFile to fd 1 ...
I0422 03:49:46.192357    3115 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:46.192362    3115 out.go:304] Setting ErrFile to fd 2...
I0422 03:49:46.192366    3115 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:46.192552    3115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
I0422 03:49:46.193241    3115 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:46.193339    3115 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:46.193697    3115 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:46.193744    3115 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:46.203503    3115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50643
I0422 03:49:46.203990    3115 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:46.204504    3115 main.go:141] libmachine: Using API Version  1
I0422 03:49:46.204558    3115 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:46.204862    3115 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:46.204997    3115 main.go:141] libmachine: (functional-984000) Calling .GetState
I0422 03:49:46.205172    3115 main.go:141] libmachine: (functional-984000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0422 03:49:46.205312    3115 main.go:141] libmachine: (functional-984000) DBG | hyperkit pid from json: 2291
I0422 03:49:46.206779    3115 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:46.206808    3115 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:46.216553    3115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50645
I0422 03:49:46.217040    3115 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:46.217440    3115 main.go:141] libmachine: Using API Version  1
I0422 03:49:46.217450    3115 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:46.217696    3115 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:46.217822    3115 main.go:141] libmachine: (functional-984000) Calling .DriverName
I0422 03:49:46.218014    3115 ssh_runner.go:195] Run: systemctl --version
I0422 03:49:46.218032    3115 main.go:141] libmachine: (functional-984000) Calling .GetSSHHostname
I0422 03:49:46.218124    3115 main.go:141] libmachine: (functional-984000) Calling .GetSSHPort
I0422 03:49:46.218209    3115 main.go:141] libmachine: (functional-984000) Calling .GetSSHKeyPath
I0422 03:49:46.218310    3115 main.go:141] libmachine: (functional-984000) Calling .GetSSHUsername
I0422 03:49:46.218404    3115 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/functional-984000/id_rsa Username:docker}
I0422 03:49:46.254013    3115 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0422 03:49:46.279464    3115 main.go:141] libmachine: Making call to close driver server
I0422 03:49:46.279472    3115 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:46.279650    3115 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:46.279673    3115 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:46.279686    3115 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 03:49:46.279696    3115 main.go:141] libmachine: Making call to close driver server
I0422 03:49:46.279709    3115 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:46.279869    3115 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:46.279871    3115 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:46.279885    3115 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-984000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-984000 | 49378a7c06404 | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-984000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-984000 | 446fbf80c63a7 | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| docker.io/library/nginx                     | alpine            | 11d76b979f02d | 48.3MB |
| docker.io/library/nginx                     | latest            | 2ac752d7aeb1d | 188MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-984000 image ls --format table --alsologtostderr:
I0422 03:49:49.046391    3141 out.go:291] Setting OutFile to fd 1 ...
I0422 03:49:49.047114    3141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:49.047124    3141 out.go:304] Setting ErrFile to fd 2...
I0422 03:49:49.047130    3141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:49.047727    3141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
I0422 03:49:49.048573    3141 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:49.048683    3141 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:49.049088    3141 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:49.049160    3141 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:49.059130    3141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50676
I0422 03:49:49.059737    3141 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:49.060275    3141 main.go:141] libmachine: Using API Version  1
I0422 03:49:49.060287    3141 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:49.060583    3141 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:49.060736    3141 main.go:141] libmachine: (functional-984000) Calling .GetState
I0422 03:49:49.060851    3141 main.go:141] libmachine: (functional-984000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0422 03:49:49.060941    3141 main.go:141] libmachine: (functional-984000) DBG | hyperkit pid from json: 2291
I0422 03:49:49.062551    3141 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:49.062588    3141 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:49.071566    3141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50678
I0422 03:49:49.071932    3141 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:49.072289    3141 main.go:141] libmachine: Using API Version  1
I0422 03:49:49.072303    3141 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:49.072530    3141 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:49.072652    3141 main.go:141] libmachine: (functional-984000) Calling .DriverName
I0422 03:49:49.072822    3141 ssh_runner.go:195] Run: systemctl --version
I0422 03:49:49.072839    3141 main.go:141] libmachine: (functional-984000) Calling .GetSSHHostname
I0422 03:49:49.072940    3141 main.go:141] libmachine: (functional-984000) Calling .GetSSHPort
I0422 03:49:49.073028    3141 main.go:141] libmachine: (functional-984000) Calling .GetSSHKeyPath
I0422 03:49:49.073144    3141 main.go:141] libmachine: (functional-984000) Calling .GetSSHUsername
I0422 03:49:49.073233    3141 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/functional-984000/id_rsa Username:docker}
I0422 03:49:49.108596    3141 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0422 03:49:49.125418    3141 main.go:141] libmachine: Making call to close driver server
I0422 03:49:49.125427    3141 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:49.125610    3141 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:49.125648    3141 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 03:49:49.125655    3141 main.go:141] libmachine: Making call to close driver server
I0422 03:49:49.125659    3141 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:49.125657    3141 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:49.125838    3141 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:49.125838    3141 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:49.125847    3141 main.go:141] libmachine: Making call to close connection to plugin binary
2024/04/22 03:49:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-984000 image ls --format json --alsologtostderr:
[{"id":"446fbf80c63a7b692ce58261f8f9591c2d3f0e4c9651c34cb86c5c1c3edb74b9","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-984000"],"size":"1240000"},{"id":"49378a7c06404c5d1b7fde5bd2e27ad48ea7d4e377a8a9e767c29989cf66d1b9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-984000"],"size":"30"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-984000"],"size":"32900000"},{"id":"82e4c8a
736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":
[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}
,{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-984000 image ls --format json --alsologtostderr:
I0422 03:49:48.861103    3137 out.go:291] Setting OutFile to fd 1 ...
I0422 03:49:48.861387    3137 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:48.861393    3137 out.go:304] Setting ErrFile to fd 2...
I0422 03:49:48.861398    3137 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:48.862022    3137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
I0422 03:49:48.863075    3137 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:48.863185    3137 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:48.863591    3137 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:48.863648    3137 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:48.872531    3137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50671
I0422 03:49:48.872949    3137 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:48.873409    3137 main.go:141] libmachine: Using API Version  1
I0422 03:49:48.873419    3137 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:48.873719    3137 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:48.873867    3137 main.go:141] libmachine: (functional-984000) Calling .GetState
I0422 03:49:48.873970    3137 main.go:141] libmachine: (functional-984000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0422 03:49:48.874056    3137 main.go:141] libmachine: (functional-984000) DBG | hyperkit pid from json: 2291
I0422 03:49:48.875524    3137 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:48.875548    3137 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:48.884232    3137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50673
I0422 03:49:48.884573    3137 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:48.884952    3137 main.go:141] libmachine: Using API Version  1
I0422 03:49:48.884972    3137 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:48.885225    3137 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:48.885357    3137 main.go:141] libmachine: (functional-984000) Calling .DriverName
I0422 03:49:48.885517    3137 ssh_runner.go:195] Run: systemctl --version
I0422 03:49:48.885534    3137 main.go:141] libmachine: (functional-984000) Calling .GetSSHHostname
I0422 03:49:48.885629    3137 main.go:141] libmachine: (functional-984000) Calling .GetSSHPort
I0422 03:49:48.885715    3137 main.go:141] libmachine: (functional-984000) Calling .GetSSHKeyPath
I0422 03:49:48.885799    3137 main.go:141] libmachine: (functional-984000) Calling .GetSSHUsername
I0422 03:49:48.885889    3137 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/functional-984000/id_rsa Username:docker}
I0422 03:49:48.921265    3137 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0422 03:49:48.951154    3137 main.go:141] libmachine: Making call to close driver server
I0422 03:49:48.951167    3137 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:48.951359    3137 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:48.951413    3137 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:48.951430    3137 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 03:49:48.951441    3137 main.go:141] libmachine: Making call to close driver server
I0422 03:49:48.951466    3137 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:48.951679    3137 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:48.951694    3137 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 03:49:48.951694    3137 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
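
Note: `image ls --format json` emits an array of {id, repoDigests, repoTags, size} objects with sizes in bytes, which makes the output easy to post-process, e.g.:

	out/minikube-darwin-amd64 -p functional-984000 image ls --format json | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'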

TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-984000 image ls --format yaml --alsologtostderr:
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-984000
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 49378a7c06404c5d1b7fde5bd2e27ad48ea7d4e377a8a9e767c29989cf66d1b9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-984000
size: "30"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-984000 image ls --format yaml --alsologtostderr:
I0422 03:49:46.373982    3120 out.go:291] Setting OutFile to fd 1 ...
I0422 03:49:46.374268    3120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:46.374274    3120 out.go:304] Setting ErrFile to fd 2...
I0422 03:49:46.374278    3120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:46.374485    3120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
I0422 03:49:46.375132    3120 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:46.375228    3120 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:46.375590    3120 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:46.375653    3120 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:46.384497    3120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50648
I0422 03:49:46.384927    3120 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:46.385389    3120 main.go:141] libmachine: Using API Version  1
I0422 03:49:46.385403    3120 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:46.385651    3120 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:46.385795    3120 main.go:141] libmachine: (functional-984000) Calling .GetState
I0422 03:49:46.385907    3120 main.go:141] libmachine: (functional-984000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0422 03:49:46.385970    3120 main.go:141] libmachine: (functional-984000) DBG | hyperkit pid from json: 2291
I0422 03:49:46.387332    3120 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:46.387356    3120 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:46.396899    3120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50650
I0422 03:49:46.397391    3120 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:46.397786    3120 main.go:141] libmachine: Using API Version  1
I0422 03:49:46.397802    3120 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:46.398079    3120 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:46.398205    3120 main.go:141] libmachine: (functional-984000) Calling .DriverName
I0422 03:49:46.398410    3120 ssh_runner.go:195] Run: systemctl --version
I0422 03:49:46.398431    3120 main.go:141] libmachine: (functional-984000) Calling .GetSSHHostname
I0422 03:49:46.398571    3120 main.go:141] libmachine: (functional-984000) Calling .GetSSHPort
I0422 03:49:46.398663    3120 main.go:141] libmachine: (functional-984000) Calling .GetSSHKeyPath
I0422 03:49:46.398761    3120 main.go:141] libmachine: (functional-984000) Calling .GetSSHUsername
I0422 03:49:46.398885    3120 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/functional-984000/id_rsa Username:docker}
I0422 03:49:46.437338    3120 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0422 03:49:46.462051    3120 main.go:141] libmachine: Making call to close driver server
I0422 03:49:46.462061    3120 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:46.462203    3120 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:46.462210    3120 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 03:49:46.462217    3120 main.go:141] libmachine: Making call to close driver server
I0422 03:49:46.462222    3120 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:46.462225    3120 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:46.462368    3120 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:46.462371    3120 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:46.462391    3120 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 ssh pgrep buildkitd: exit status 1 (156.276706ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image build -t localhost/my-image:functional-984000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 image build -t localhost/my-image:functional-984000 testdata/build --alsologtostderr: (1.978937884s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-984000 image build -t localhost/my-image:functional-984000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in cdb49cf8c2d0
---> Removed intermediate container cdb49cf8c2d0
---> 573b0d24c497
Step 3/3 : ADD content.txt /
---> 446fbf80c63a
Successfully built 446fbf80c63a
Successfully tagged localhost/my-image:functional-984000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-984000 image build -t localhost/my-image:functional-984000 testdata/build --alsologtostderr:
I0422 03:49:46.713297    3129 out.go:291] Setting OutFile to fd 1 ...
I0422 03:49:46.713724    3129 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:46.713731    3129 out.go:304] Setting ErrFile to fd 2...
I0422 03:49:46.713735    3129 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 03:49:46.713939    3129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
I0422 03:49:46.714664    3129 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:46.715721    3129 config.go:182] Loaded profile config "functional-984000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 03:49:46.716151    3129 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:46.716188    3129 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:46.725435    3129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50660
I0422 03:49:46.725923    3129 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:46.726383    3129 main.go:141] libmachine: Using API Version  1
I0422 03:49:46.726396    3129 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:46.726617    3129 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:46.726754    3129 main.go:141] libmachine: (functional-984000) Calling .GetState
I0422 03:49:46.726901    3129 main.go:141] libmachine: (functional-984000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0422 03:49:46.726970    3129 main.go:141] libmachine: (functional-984000) DBG | hyperkit pid from json: 2291
I0422 03:49:46.728414    3129 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0422 03:49:46.728437    3129 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0422 03:49:46.737684    3129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50662
I0422 03:49:46.738191    3129 main.go:141] libmachine: () Calling .GetVersion
I0422 03:49:46.738563    3129 main.go:141] libmachine: Using API Version  1
I0422 03:49:46.738574    3129 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 03:49:46.738939    3129 main.go:141] libmachine: () Calling .GetMachineName
I0422 03:49:46.739176    3129 main.go:141] libmachine: (functional-984000) Calling .DriverName
I0422 03:49:46.739356    3129 ssh_runner.go:195] Run: systemctl --version
I0422 03:49:46.739375    3129 main.go:141] libmachine: (functional-984000) Calling .GetSSHHostname
I0422 03:49:46.739486    3129 main.go:141] libmachine: (functional-984000) Calling .GetSSHPort
I0422 03:49:46.739589    3129 main.go:141] libmachine: (functional-984000) Calling .GetSSHKeyPath
I0422 03:49:46.739678    3129 main.go:141] libmachine: (functional-984000) Calling .GetSSHUsername
I0422 03:49:46.739776    3129 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/functional-984000/id_rsa Username:docker}
I0422 03:49:46.778229    3129 build_images.go:161] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.435753013.tar
I0422 03:49:46.778304    3129 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0422 03:49:46.795503    3129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.435753013.tar
I0422 03:49:46.806652    3129 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.435753013.tar: stat -c "%s %y" /var/lib/minikube/build/build.435753013.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.435753013.tar': No such file or directory
I0422 03:49:46.806688    3129 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.435753013.tar --> /var/lib/minikube/build/build.435753013.tar (3072 bytes)
I0422 03:49:46.852197    3129 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.435753013
I0422 03:49:46.868534    3129 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.435753013 -xf /var/lib/minikube/build/build.435753013.tar
I0422 03:49:46.881785    3129 docker.go:360] Building image: /var/lib/minikube/build/build.435753013
I0422 03:49:46.881891    3129 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-984000 /var/lib/minikube/build/build.435753013
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0422 03:49:48.577448    3129 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-984000 /var/lib/minikube/build/build.435753013: (1.695516045s)
I0422 03:49:48.577514    3129 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.435753013
I0422 03:49:48.586306    3129 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.435753013.tar
I0422 03:49:48.595162    3129 build_images.go:217] Built localhost/my-image:functional-984000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.435753013.tar
I0422 03:49:48.595187    3129 build_images.go:133] succeeded building to: functional-984000
I0422 03:49:48.595191    3129 build_images.go:134] failed building to: 
I0422 03:49:48.595210    3129 main.go:141] libmachine: Making call to close driver server
I0422 03:49:48.595217    3129 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:48.595409    3129 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:48.595421    3129 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 03:49:48.595431    3129 main.go:141] libmachine: Making call to close driver server
I0422 03:49:48.595430    3129 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:48.595438    3129 main.go:141] libmachine: (functional-984000) Calling .Close
I0422 03:49:48.595561    3129 main.go:141] libmachine: (functional-984000) DBG | Closing plugin on server side
I0422 03:49:48.595588    3129 main.go:141] libmachine: Successfully made call to close driver server
I0422 03:49:48.595596    3129 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.31s)
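
Note: the Step 1/3 through 3/3 lines imply that testdata/build contains content.txt plus a three-line Dockerfile of this shape (reconstructed from the build log, not copied from the repo):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

Because `pgrep buildkitd` exited 1, the test fell back to the classic docker builder inside the VM, which is why the legacy-builder deprecation warning shows up in stderr.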

TestFunctional/parallel/ImageCommands/Setup (2.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.365012609s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-984000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.44s)

TestFunctional/parallel/DockerEnv/bash (0.94s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-984000 docker-env) && out/minikube-darwin-amd64 status -p functional-984000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-984000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.94s)
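
Note: `docker-env` prints export statements (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH, ...) that point the host's docker CLI at the daemon inside the VM; the eval pattern above is the intended usage:

	eval $(out/minikube-darwin-amd64 -p functional-984000 docker-env)
	docker images   # now lists the images inside functional-984000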

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
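
All three update-context variants above exercise the same command, which rewrites the profile's kubeconfig entry so the API server address matches the VM's current IP. A sketch of manual use:

	# Repair a kubeconfig that went stale because the VM's IP changed across restarts.
	minikube -p functional-984000 update-context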

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image load --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 image load --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr: (4.081067475s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image load --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 image load --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr: (2.312417366s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.54s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.906446238s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-984000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image load --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 image load --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr: (3.591532194s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image save gcr.io/google-containers/addon-resizer:functional-984000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 image save gcr.io/google-containers/addon-resizer:functional-984000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.423741118s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image rm gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.261976553s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.45s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-984000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 image save --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-984000 image save --daemon gcr.io/google-containers/addon-resizer:functional-984000 --alsologtostderr: (1.223190502s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-984000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)
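
Taken together, the ImageCommands tests above cover a full round trip between the host Docker daemon, a tarball on disk, and the cluster's container runtime. Condensed into one sequence (the /tmp tarball path is illustrative; this run used a Jenkins workspace path):

	minikube -p functional-984000 image load --daemon gcr.io/google-containers/addon-resizer:functional-984000    # host daemon -> cluster
	minikube -p functional-984000 image ls                                                                        # confirm the image arrived
	minikube -p functional-984000 image save gcr.io/google-containers/addon-resizer:functional-984000 /tmp/addon-resizer.tar
	minikube -p functional-984000 image rm gcr.io/google-containers/addon-resizer:functional-984000
	minikube -p functional-984000 image load /tmp/addon-resizer.tar                                               # tarball -> cluster
	minikube -p functional-984000 image save --daemon gcr.io/google-containers/addon-resizer:functional-984000    # cluster -> host daemon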

TestFunctional/parallel/ServiceCmd/DeployApp (12.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-984000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-984000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-6fvhc" [29f4ecf3-667d-4db6-973c-d7dd5c9fd16a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-6fvhc" [29f4ecf3-667d-4db6-973c-d7dd5c9fd16a] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003910102s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.15s)
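
The hello-node workload used by the rest of the ServiceCmd group is created with plain kubectl against the profile's context; the NodePort service is what the later List/HTTPS/Format/URL checks resolve. As standalone commands:

	kubectl --context functional-984000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-984000 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-984000 get pods -l app=hello-node    # wait until the pod reports Running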

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-984000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-984000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-984000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2812: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-984000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-984000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-984000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7103c8d1-413f-4b31-a60c-10692db8681c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7103c8d1-413f-4b31-a60c-10692db8681c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003232116s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

TestFunctional/parallel/ServiceCmd/List (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 service list -o json
functional_test.go:1490: Took "403.799552ms" to run "out/minikube-darwin-amd64 -p functional-984000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.5:31674
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.5:31674
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.87.26 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-984000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
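
The TunnelCmd group relies on minikube tunnel holding open a host route into the cluster's service network, which is how the nginx-svc LoadBalancer obtained the ingress IP (10.98.87.26 in this run) that AccessDirect curled. A minimal manual sequence (the IP varies per run, and route setup may prompt for elevated privileges on some drivers):

	minikube -p functional-984000 tunnel &    # keeps the route alive until killed
	kubectl --context functional-984000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.98.87.26/                  # substitute the IP printed by the previous command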

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "220.782619ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "89.599889ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "227.13941ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "90.054065ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (6.24s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3768046312/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713782971932412000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3768046312/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713782971932412000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3768046312/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713782971932412000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3768046312/001/test-1713782971932412000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (170.668649ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 22 10:49 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 22 10:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 22 10:49 test-1713782971932412000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh cat /mount-9p/test-1713782971932412000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-984000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [85d9f172-25d7-464f-8eb3-14ffdc107217] Pending
helpers_test.go:344: "busybox-mount" [85d9f172-25d7-464f-8eb3-14ffdc107217] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [85d9f172-25d7-464f-8eb3-14ffdc107217] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [85d9f172-25d7-464f-8eb3-14ffdc107217] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003350179s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-984000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3768046312/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.24s)
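
The any-port test exercises minikube's 9p host-folder mount end to end: mount, verify from inside the guest, consume the files from a pod, then unmount. The first findmnt probe fails with exit status 1 and is simply retried, apparently because the mount daemon needs a moment to come up. The core commands, with an illustrative host directory:

	minikube mount -p functional-984000 /tmp/mount-demo:/mount-9p &      # /tmp/mount-demo is a hypothetical stand-in
	minikube -p functional-984000 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p filesystem is mounted
	minikube -p functional-984000 ssh -- ls -la /mount-9p
	minikube -p functional-984000 ssh "sudo umount -f /mount-9p"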

TestFunctional/parallel/MountCmd/specific-port (1.78s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1829389860/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (174.50691ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1829389860/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 ssh "sudo umount -f /mount-9p": exit status 1 (136.622528ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-984000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1829389860/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2248436996/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2248436996/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2248436996/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T" /mount1: exit status 1 (178.102025ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-984000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-984000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2248436996/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2248436996/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-984000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2248436996/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
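
Rather than killing mount daemons PID by PID, VerifyCleanup finishes with minikube's own cleanup switch, which tears down the mount processes belonging to the profile:

	minikube mount -p functional-984000 --kill=true    # terminate this profile's mount processes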

TestFunctional/delete_addon-resizer_images (0.14s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-984000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

TestFunctional/delete_my-image_image (0.06s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-984000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-984000
E0422 03:49:53.121328    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (439.23s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-069000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0422 03:50:20.820928    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:53:44.177915    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:44.183103    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:44.193983    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:44.214126    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:44.255297    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:44.335448    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:44.497272    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:44.817369    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:45.458899    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:46.739115    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:49.299312    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:53:54.419555    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:54:04.659934    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:54:25.141885    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:54:53.126715    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 03:55:06.102748    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:56:28.024168    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-069000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (7m18.832232094s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (439.23s)
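
StartCluster brings up the multi-control-plane topology that the remaining TestMultiControlPlane steps build on; the --ha flag provisions additional control-plane nodes alongside the primary. The essential commands from the run:

	minikube start -p ha-069000 --wait=true --memory=2200 --ha --driver=hyperkit
	minikube -p ha-069000 status -v=7 --alsologtostderr    # one status block per node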

TestMultiControlPlane/serial/DeployApp (5.45s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-069000 -- rollout status deployment/busybox: (3.058692469s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-29gs9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-kw8dj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-z6bs2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-29gs9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-kw8dj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-z6bs2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-29gs9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-kw8dj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-z6bs2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.45s)

TestMultiControlPlane/serial/PingHostFromPods (1.42s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-29gs9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-29gs9 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-kw8dj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-kw8dj -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-z6bs2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-069000 -- exec busybox-fc5497c4f-z6bs2 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.42s)
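
Each busybox pod resolves host.minikube.internal, a name minikube publishes so guests and pods can reach the host machine, and then pings the resulting gateway address (192.169.0.1 on this hyperkit network). Condensed, using one of the pod names from the run:

	kubectl --context ha-069000 exec busybox-fc5497c4f-29gs9 -- nslookup host.minikube.internal
	kubectl --context ha-069000 exec busybox-fc5497c4f-29gs9 -- ping -c 1 192.169.0.1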

TestMultiControlPlane/serial/AddWorkerNode (43.32s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-069000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-069000 -v=7 --alsologtostderr: (42.83947531s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-069000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.32s)
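
node add joins a fourth machine to the running cluster; without extra flags the new node comes up as a worker (ha-069000-m04 in this run) rather than another control plane:

	minikube node add -p ha-069000
	minikube -p ha-069000 status    # the new node should appear alongside the control planes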

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-069000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (227.96s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0422 03:58:44.183703    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:59:11.866905    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 03:59:53.131147    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 04:01:16.368186    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (3m47.957674512s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (227.96s)

TestImageBuild/serial/Setup (40s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-859000 --driver=hyperkit 
E0422 04:23:44.310894    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-859000 --driver=hyperkit : (40.004827605s)
--- PASS: TestImageBuild/serial/Setup (40.00s)

TestImageBuild/serial/NormalBuild (1.23s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-859000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-859000: (1.230121045s)
--- PASS: TestImageBuild/serial/NormalBuild (1.23s)

TestImageBuild/serial/BuildWithBuildArg (0.52s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-859000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.52s)

TestImageBuild/serial/BuildWithDockerIgnore (0.26s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-859000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.26s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.25s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-859000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.25s)
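
The four TestImageBuild cases map directly onto minikube image build flags: a plain build, build arguments and cache control via --build-opt, a context carrying a .dockerignore, and an alternate Dockerfile location via -f. Side by side, as run against this profile:

	minikube -p image-859000 image build -t aaa:latest ./testdata/image-build/test-normal
	minikube -p image-859000 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
	minikube -p image-859000 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache
	minikube -p image-859000 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f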

TestJSONOutput/start/Command (83.12s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-857000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0422 04:24:53.256675    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-857000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m23.121311924s)
--- PASS: TestJSONOutput/start/Command (83.12s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.47s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-857000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-857000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.32s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-857000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-857000 --output=json --user=testUser: (8.323034279s)
--- PASS: TestJSONOutput/stop/Command (8.32s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-297000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-297000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (385.666512ms)
-- stdout --
	{"specversion":"1.0","id":"1ff873b1-78ad-4cc6-9415-ce9629e0a1a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-297000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7201fd12-aa59-40ea-8590-6b461053c7c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18711"}}
	{"specversion":"1.0","id":"dabe87f6-0c2c-451d-91db-673ab8a9e107","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig"}}
	{"specversion":"1.0","id":"f5cb7670-36cc-4236-bd8d-a77d243ce5fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"9ac76095-1548-431c-ba91-16ec07fd22df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6e89fba5-4726-4e26-b91c-53cd0f03abaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube"}}
	{"specversion":"1.0","id":"3c792d67-fca0-4841-a501-16661e26fe84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7d21c574-e202-46df-b4ee-cd67b997428c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-297000
--- PASS: TestErrorJSONOutput (0.77s)
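
Every line minikube prints under --output=json is a CloudEvents-style object with specversion, type, and a data payload, which is what lets the test assert on the structured DRV_UNSUPPORTED_OS error (exit code 56) for the bogus fail driver. A sketch for extracting just the error message from such a stream (the jq pipeline is illustrative, not part of the test):

	minikube start -p json-output-error-297000 --memory=2200 --output=json --wait=true --driver=fail 2>&1 \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'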

TestMainNoArgs (0.09s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (91.84s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-899000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-899000 --driver=hyperkit : (40.587012101s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-901000 --driver=hyperkit 
E0422 04:26:47.357009    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-901000 --driver=hyperkit : (39.771868811s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-899000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-901000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-901000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-901000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-901000: (5.28293783s)
helpers_test.go:175: Cleaning up "first-899000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-899000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-899000: (5.300334614s)
--- PASS: TestMinikubeProfile (91.84s)
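
The two `profile list -ojson` runs above return machine-readable profile data. A sketch of pulling names out of it, assuming jq is available and that the output keeps minikube's usual valid/invalid grouping:

	# Print the name of every valid profile.
	minikube profile list -o json | jq -r '.valid[].Name'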

TestMountStart/serial/StartWithMountFirst (21.65s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-829000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-829000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.648268258s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.65s)

TestMountStart/serial/VerifyMountFirst (0.31s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-829000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-829000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
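
The verification here is just two ssh round-trips, replayable by hand against the same profile:

	# The host directory should be visible at /minikube-host inside the guest...
	minikube -p mount-start-1-829000 ssh -- ls /minikube-host
	# ...and should appear as a 9p filesystem in the guest's mount table.
	minikube -p mount-start-1-829000 ssh -- mount | grep 9p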

TestMountStart/serial/StartWithMountSecond (18.12s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-843000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-843000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (17.11447466s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.12s)

TestMountStart/serial/VerifyMountSecond (0.31s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (2.39s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-829000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-829000 --alsologtostderr -v=5: (2.391302168s)
--- PASS: TestMountStart/serial/DeleteFirst (2.39s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (2.39s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-843000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-843000: (2.388494824s)
--- PASS: TestMountStart/serial/Stop (2.39s)

TestMountStart/serial/RestartStopped (20.3s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-843000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-843000: (19.297972716s)
--- PASS: TestMountStart/serial/RestartStopped (20.30s)

TestMountStart/serial/VerifyMountPostStop (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (210.59s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0422 04:28:44.312185    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 04:29:53.258306    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-449000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (3m30.341464374s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (210.59s)

TestMultiNode/serial/DeployApp2Nodes (4.87s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-449000 -- rollout status deployment/busybox: (3.176087901s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-lr9sv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-xzgp2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-lr9sv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-xzgp2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-lr9sv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-xzgp2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.87s)
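
The DNS assertions run the same nslookup against each busybox replica. A hedged sketch of that loop, assuming the pods carry an app=busybox label (the test's manifest is not shown here, so the selector is an assumption):

	# Resolve the in-cluster API service from every busybox pod.
	for pod in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done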

TestMultiNode/serial/PingHostFrom2Pods (0.92s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-lr9sv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-lr9sv -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-xzgp2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-449000 -- exec busybox-fc5497c4f-xzgp2 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

TestMultiNode/serial/AddNode (35.25s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-449000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-449000 -v 3 --alsologtostderr: (34.882112517s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.25s)

TestMultiNode/serial/MultiNodeLabels (0.05s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-449000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (5.49s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp testdata/cp-test.txt multinode-449000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile25091067/001/cp-test_multinode-449000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000:/home/docker/cp-test.txt multinode-449000-m02:/home/docker/cp-test_multinode-449000_multinode-449000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m02 "sudo cat /home/docker/cp-test_multinode-449000_multinode-449000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000:/home/docker/cp-test.txt multinode-449000-m03:/home/docker/cp-test_multinode-449000_multinode-449000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m03 "sudo cat /home/docker/cp-test_multinode-449000_multinode-449000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp testdata/cp-test.txt multinode-449000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile25091067/001/cp-test_multinode-449000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000-m02:/home/docker/cp-test.txt multinode-449000:/home/docker/cp-test_multinode-449000-m02_multinode-449000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000 "sudo cat /home/docker/cp-test_multinode-449000-m02_multinode-449000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000-m02:/home/docker/cp-test.txt multinode-449000-m03:/home/docker/cp-test_multinode-449000-m02_multinode-449000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m03 "sudo cat /home/docker/cp-test_multinode-449000-m02_multinode-449000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp testdata/cp-test.txt multinode-449000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile25091067/001/cp-test_multinode-449000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000-m03:/home/docker/cp-test.txt multinode-449000:/home/docker/cp-test_multinode-449000-m03_multinode-449000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000 "sudo cat /home/docker/cp-test_multinode-449000-m03_multinode-449000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 cp multinode-449000-m03:/home/docker/cp-test.txt multinode-449000-m02:/home/docker/cp-test_multinode-449000-m03_multinode-449000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 ssh -n multinode-449000-m02 "sudo cat /home/docker/cp-test_multinode-449000-m03_multinode-449000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.49s)
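
The matrix above exercises every direction of `minikube cp` (host-to-node, node-to-host, node-to-node), each leg verified with `ssh -n <node> sudo cat`. Condensed to one leg per direction, with paths taken from this run and an illustrative local destination:

	# host -> node
	minikube -p multinode-449000 cp testdata/cp-test.txt multinode-449000:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-449000 cp multinode-449000:/home/docker/cp-test.txt ./cp-test.txt
	# node -> node
	minikube -p multinode-449000 cp multinode-449000:/home/docker/cp-test.txt multinode-449000-m02:/home/docker/cp-test.txt
	# verify on the receiving node
	minikube -p multinode-449000 ssh -n multinode-449000-m02 "sudo cat /home/docker/cp-test.txt"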

TestMultiNode/serial/StopNode (2.86s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 node stop m03: (2.343736854s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status: exit status 7 (258.075363ms)
-- stdout --
	multinode-449000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-449000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-449000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr: exit status 7 (255.816834ms)
-- stdout --
	multinode-449000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-449000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-449000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0422 04:32:58.499881    6171 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:32:58.500178    6171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:32:58.500184    6171 out.go:304] Setting ErrFile to fd 2...
	I0422 04:32:58.500188    6171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:32:58.500391    6171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:32:58.500571    6171 out.go:298] Setting JSON to false
	I0422 04:32:58.500596    6171 mustload.go:65] Loading cluster: multinode-449000
	I0422 04:32:58.500631    6171 notify.go:220] Checking for updates...
	I0422 04:32:58.501648    6171 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:32:58.501782    6171 status.go:255] checking status of multinode-449000 ...
	I0422 04:32:58.502318    6171 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:32:58.502361    6171 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:32:58.511148    6171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51951
	I0422 04:32:58.511452    6171 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:32:58.511845    6171 main.go:141] libmachine: Using API Version  1
	I0422 04:32:58.511854    6171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:32:58.512097    6171 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:32:58.512213    6171 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I0422 04:32:58.512294    6171 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:32:58.512368    6171 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 5697
	I0422 04:32:58.513622    6171 status.go:330] multinode-449000 host status = "Running" (err=<nil>)
	I0422 04:32:58.513641    6171 host.go:66] Checking if "multinode-449000" exists ...
	I0422 04:32:58.513886    6171 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:32:58.513909    6171 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:32:58.522508    6171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51953
	I0422 04:32:58.522836    6171 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:32:58.523162    6171 main.go:141] libmachine: Using API Version  1
	I0422 04:32:58.523208    6171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:32:58.523471    6171 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:32:58.523598    6171 main.go:141] libmachine: (multinode-449000) Calling .GetIP
	I0422 04:32:58.523677    6171 host.go:66] Checking if "multinode-449000" exists ...
	I0422 04:32:58.523939    6171 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:32:58.523967    6171 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:32:58.532666    6171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51955
	I0422 04:32:58.533027    6171 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:32:58.533401    6171 main.go:141] libmachine: Using API Version  1
	I0422 04:32:58.533418    6171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:32:58.533618    6171 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:32:58.533728    6171 main.go:141] libmachine: (multinode-449000) Calling .DriverName
	I0422 04:32:58.533894    6171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:32:58.533912    6171 main.go:141] libmachine: (multinode-449000) Calling .GetSSHHostname
	I0422 04:32:58.533984    6171 main.go:141] libmachine: (multinode-449000) Calling .GetSSHPort
	I0422 04:32:58.534057    6171 main.go:141] libmachine: (multinode-449000) Calling .GetSSHKeyPath
	I0422 04:32:58.534124    6171 main.go:141] libmachine: (multinode-449000) Calling .GetSSHUsername
	I0422 04:32:58.534203    6171 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000/id_rsa Username:docker}
	I0422 04:32:58.563312    6171 ssh_runner.go:195] Run: systemctl --version
	I0422 04:32:58.567814    6171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 04:32:58.579666    6171 kubeconfig.go:125] found "multinode-449000" server: "https://192.169.0.16:8443"
	I0422 04:32:58.579692    6171 api_server.go:166] Checking apiserver status ...
	I0422 04:32:58.579728    6171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 04:32:58.590828    6171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1827/cgroup
	W0422 04:32:58.598446    6171 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1827/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 04:32:58.598498    6171 ssh_runner.go:195] Run: ls
	I0422 04:32:58.601645    6171 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0422 04:32:58.604561    6171 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0422 04:32:58.604572    6171 status.go:422] multinode-449000 apiserver status = Running (err=<nil>)
	I0422 04:32:58.604587    6171 status.go:257] multinode-449000 status: &{Name:multinode-449000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 04:32:58.604598    6171 status.go:255] checking status of multinode-449000-m02 ...
	I0422 04:32:58.604825    6171 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:32:58.604847    6171 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:32:58.613639    6171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51959
	I0422 04:32:58.613973    6171 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:32:58.614336    6171 main.go:141] libmachine: Using API Version  1
	I0422 04:32:58.614353    6171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:32:58.614557    6171 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:32:58.614674    6171 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I0422 04:32:58.614751    6171 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:32:58.614825    6171 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 5740
	I0422 04:32:58.616037    6171 status.go:330] multinode-449000-m02 host status = "Running" (err=<nil>)
	I0422 04:32:58.616046    6171 host.go:66] Checking if "multinode-449000-m02" exists ...
	I0422 04:32:58.616298    6171 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:32:58.616320    6171 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:32:58.624910    6171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51961
	I0422 04:32:58.625239    6171 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:32:58.625595    6171 main.go:141] libmachine: Using API Version  1
	I0422 04:32:58.625633    6171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:32:58.625830    6171 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:32:58.625935    6171 main.go:141] libmachine: (multinode-449000-m02) Calling .GetIP
	I0422 04:32:58.626014    6171 host.go:66] Checking if "multinode-449000-m02" exists ...
	I0422 04:32:58.626261    6171 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:32:58.626281    6171 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:32:58.634956    6171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51963
	I0422 04:32:58.635302    6171 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:32:58.635637    6171 main.go:141] libmachine: Using API Version  1
	I0422 04:32:58.635655    6171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:32:58.635862    6171 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:32:58.635972    6171 main.go:141] libmachine: (multinode-449000-m02) Calling .DriverName
	I0422 04:32:58.636093    6171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 04:32:58.636103    6171 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHHostname
	I0422 04:32:58.636181    6171 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHPort
	I0422 04:32:58.636261    6171 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHKeyPath
	I0422 04:32:58.636342    6171 main.go:141] libmachine: (multinode-449000-m02) Calling .GetSSHUsername
	I0422 04:32:58.636417    6171 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18711-1033/.minikube/machines/multinode-449000-m02/id_rsa Username:docker}
	I0422 04:32:58.670670    6171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 04:32:58.681620    6171 status.go:257] multinode-449000-m02 status: &{Name:multinode-449000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0422 04:32:58.681642    6171 status.go:255] checking status of multinode-449000-m03 ...
	I0422 04:32:58.681932    6171 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:32:58.681957    6171 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:32:58.690881    6171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51966
	I0422 04:32:58.691232    6171 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:32:58.691591    6171 main.go:141] libmachine: Using API Version  1
	I0422 04:32:58.691607    6171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:32:58.691823    6171 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:32:58.691946    6171 main.go:141] libmachine: (multinode-449000-m03) Calling .GetState
	I0422 04:32:58.692026    6171 main.go:141] libmachine: (multinode-449000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:32:58.692105    6171 main.go:141] libmachine: (multinode-449000-m03) DBG | hyperkit pid from json: 5944
	I0422 04:32:58.693359    6171 main.go:141] libmachine: (multinode-449000-m03) DBG | hyperkit pid 5944 missing from process table
	I0422 04:32:58.693379    6171 status.go:330] multinode-449000-m03 host status = "Stopped" (err=<nil>)
	I0422 04:32:58.693386    6171 status.go:343] host is not running, skipping remaining checks
	I0422 04:32:58.693393    6171 status.go:257] multinode-449000-m03 status: &{Name:multinode-449000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.86s)
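
Note that the exit status 7 above is expected: in this run `status` exits 7 once any host is stopped, so it doubles as a health probe. A sketch that uses the exit code the way the test does:

	# Exit code 7 means "a node is stopped", not "the command itself failed".
	if ! minikube -p multinode-449000 status; then
	  minikube -p multinode-449000 node start m03
	fi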

TestMultiNode/serial/StartAfterStop (31.59s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 node start m03 -v=7 --alsologtostderr: (31.218673407s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.59s)

TestMultiNode/serial/RestartKeepsNodes (259.6s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-449000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-449000
E0422 04:33:44.313625    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-449000: (18.811294652s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr
E0422 04:34:36.322638    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 04:34:53.260082    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-449000 --wait=true -v=8 --alsologtostderr: (4m0.655468196s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-449000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (259.60s)
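
The property under test is that a full stop/start cycle preserves the node list. Reduced to its essence (before.txt is a scratch file; diff is assumed available):

	minikube node list -p multinode-449000 > before.txt
	minikube stop -p multinode-449000
	minikube start -p multinode-449000 --wait=true
	minikube node list -p multinode-449000 | diff before.txt -   # empty diff = nodes kept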

TestMultiNode/serial/DeleteNode (3.49s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 node delete m03: (3.073201014s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.49s)

TestMultiNode/serial/StopMultiNode (16.82s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-449000 stop: (16.639769146s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status: exit status 7 (97.395313ms)
-- stdout --
	multinode-449000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-449000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-449000 status --alsologtostderr: exit status 7 (85.378912ms)
-- stdout --
	multinode-449000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-449000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0422 04:38:10.162253    6412 out.go:291] Setting OutFile to fd 1 ...
	I0422 04:38:10.162874    6412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:38:10.162882    6412 out.go:304] Setting ErrFile to fd 2...
	I0422 04:38:10.162887    6412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 04:38:10.163279    6412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18711-1033/.minikube/bin
	I0422 04:38:10.163687    6412 out.go:298] Setting JSON to false
	I0422 04:38:10.163712    6412 mustload.go:65] Loading cluster: multinode-449000
	I0422 04:38:10.163757    6412 notify.go:220] Checking for updates...
	I0422 04:38:10.163992    6412 config.go:182] Loaded profile config "multinode-449000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 04:38:10.164008    6412 status.go:255] checking status of multinode-449000 ...
	I0422 04:38:10.164349    6412 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.164393    6412 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.172993    6412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52197
	I0422 04:38:10.173360    6412 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.173784    6412 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.173801    6412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.174052    6412 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.174194    6412 main.go:141] libmachine: (multinode-449000) Calling .GetState
	I0422 04:38:10.174288    6412 main.go:141] libmachine: (multinode-449000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.174354    6412 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid from json: 6245
	I0422 04:38:10.175324    6412 main.go:141] libmachine: (multinode-449000) DBG | hyperkit pid 6245 missing from process table
	I0422 04:38:10.175357    6412 status.go:330] multinode-449000 host status = "Stopped" (err=<nil>)
	I0422 04:38:10.175363    6412 status.go:343] host is not running, skipping remaining checks
	I0422 04:38:10.175370    6412 status.go:257] multinode-449000 status: &{Name:multinode-449000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 04:38:10.175396    6412 status.go:255] checking status of multinode-449000-m02 ...
	I0422 04:38:10.175629    6412 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0422 04:38:10.175648    6412 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0422 04:38:10.183881    6412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52199
	I0422 04:38:10.184221    6412 main.go:141] libmachine: () Calling .GetVersion
	I0422 04:38:10.184573    6412 main.go:141] libmachine: Using API Version  1
	I0422 04:38:10.184595    6412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 04:38:10.184829    6412 main.go:141] libmachine: () Calling .GetMachineName
	I0422 04:38:10.184969    6412 main.go:141] libmachine: (multinode-449000-m02) Calling .GetState
	I0422 04:38:10.185064    6412 main.go:141] libmachine: (multinode-449000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0422 04:38:10.185130    6412 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid from json: 6310
	I0422 04:38:10.186081    6412 main.go:141] libmachine: (multinode-449000-m02) DBG | hyperkit pid 6310 missing from process table
	I0422 04:38:10.186112    6412 status.go:330] multinode-449000-m02 host status = "Stopped" (err=<nil>)
	I0422 04:38:10.186116    6412 status.go:343] host is not running, skipping remaining checks
	I0422 04:38:10.186123    6412 status.go:257] multinode-449000-m02 status: &{Name:multinode-449000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.82s)

TestMultiNode/serial/ValidateNameConflict (45.73s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-449000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-449000-m02 --driver=hyperkit : exit status 14 (533.362914ms)
-- stdout --
	* [multinode-449000-m02] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-449000-m02' is duplicated with machine name 'multinode-449000-m02' in profile 'multinode-449000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-449000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-449000-m03 --driver=hyperkit : (37.181969393s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-449000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-449000: exit status 80 (274.995363ms)
-- stdout --
	* Adding node m03 to cluster multinode-449000 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-449000-m03 already exists in multinode-449000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-449000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-449000-m03: (7.609444057s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.73s)
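
Both refusals follow from the same rule: multinode-449000 already owns the machine names multinode-449000-m02 and multinode-449000-m03, so they cannot be reused as standalone profile names, and `node add` will not claim a name another profile holds. This is the exact invocation that exits 14 above:

	# Rejected with MK_USAGE: the name collides with multinode-449000's second machine.
	minikube start -p multinode-449000-m02 --driver=hyperkit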

TestScheduledStopUnix (109.3s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-067000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-067000 --memory=2048 --driver=hyperkit : (37.687717952s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-067000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-067000 -n scheduled-stop-067000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-067000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-067000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-067000 -n scheduled-stop-067000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-067000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-067000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-067000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-067000: exit status 7 (79.214345ms)
-- stdout --
	scheduled-stop-067000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-067000 -n scheduled-stop-067000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-067000 -n scheduled-stop-067000: exit status 7 (74.581655ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-067000
--- PASS: TestScheduledStopUnix (109.30s)
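
The scheduled-stop flow exercised above, replayed by hand against the same profile:

	minikube stop -p scheduled-stop-067000 --schedule 5m       # arm a stop 5 minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-067000
	minikube stop -p scheduled-stop-067000 --cancel-scheduled  # disarm it
	minikube stop -p scheduled-stop-067000 --schedule 15s      # re-arm; fires 15s later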

TestSkaffold (117.14s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3868285924 version
skaffold_test.go:59: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3868285924 version: (1.462428699s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-456000 --memory=2600 --driver=hyperkit 
E0422 04:48:44.415897    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-456000 --memory=2600 --driver=hyperkit : (39.058821803s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3868285924 run --minikube-profile skaffold-456000 --kube-context skaffold-456000 --status-check=true --port-forward=false --interactive=false
E0422 04:49:53.364425    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3868285924 run --minikube-profile skaffold-456000 --kube-context skaffold-456000 --status-check=true --port-forward=false --interactive=false: (59.156752449s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-558fcd5556-b6fwm" [6d7a0576-8b5e-4265-8616-5309bca86f86] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004483582s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5d4c68577-7pldj" [e180d652-af6f-4c7a-a776-85ede93030b5] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005078486s
helpers_test.go:175: Cleaning up "skaffold-456000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-456000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-456000: (5.274695147s)
--- PASS: TestSkaffold (117.14s)
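
The skaffold invocation is the interesting part: it targets the profile's kube-context with status checks on and port-forwarding and prompts off. Stripped of the temp-file path to the skaffold binary:

	skaffold run --minikube-profile skaffold-456000 --kube-context skaffold-456000 \
	  --status-check=true --port-forward=false --interactive=false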

TestRunningBinaryUpgrade (99.59s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.3259364945 start -p running-upgrade-965000 --memory=2200 --vm-driver=hyperkit 
E0422 04:53:44.424210    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.3259364945 start -p running-upgrade-965000 --memory=2200 --vm-driver=hyperkit : (51.294993968s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-965000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-965000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (41.390776772s)
helpers_test.go:175: Cleaning up "running-upgrade-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-965000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-965000: (5.293801246s)
--- PASS: TestRunningBinaryUpgrade (99.59s)

TestKubernetesUpgrade (121.65s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0422 04:55:06.182859    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:06.188669    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:06.199744    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:06.221730    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:06.263226    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:06.343922    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:06.504579    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:06.825153    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:07.466033    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:08.747154    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:11.307365    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:16.429261    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:26.611411    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 04:55:47.091609    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (50.288953091s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-547000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-547000: (8.376975451s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-547000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-547000 status --format={{.Host}}: exit status 7 (75.057118ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
E0422 04:56:28.052872    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (34.603308926s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-547000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (577.047086ms)
-- stdout --
	* [kubernetes-upgrade-547000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-547000
	    minikube start -p kubernetes-upgrade-547000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5470002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-547000 --kubernetes-version=v1.30.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-547000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (24.184133056s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-547000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-547000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-547000: (3.492464497s)
--- PASS: TestKubernetesUpgrade (121.65s)
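
The exit-status-106 block above is the expected K8S_DOWNGRADE_UNSUPPORTED guard: minikube refuses to move the existing v1.30.0 cluster back to v1.20.0 in place. As a rough illustration of that kind of version guard (a minimal sketch, not minikube's actual implementation; the helper validateDowngrade is hypothetical and golang.org/x/mod/semver is assumed):

	// downgrade_guard.go - hypothetical sketch of a "no in-place downgrade" check.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// validateDowngrade is a hypothetical helper: it rejects a requested
	// Kubernetes version older than the one the cluster already runs.
	func validateDowngrade(current, requested string) error {
		if semver.Compare(requested, current) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
		}
		return nil
	}

	func main() {
		// Mirrors the test: v1.30.0 cluster, v1.20.0 requested -> error.
		if err := validateDowngrade("v1.30.0", "v1.20.0"); err != nil {
			fmt.Println("X Exiting:", err)
		}
	}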

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.04s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18711
- KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3886019400/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3886019400/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3886019400/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3886019400/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.04s)
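
Both SkipUpgrade subtests hinge on the driver binary's privileges: docker-machine-driver-hyperkit must be owned by root:wheel with the setuid bit set, and with --interactive=false the sudo prompt is skipped rather than blocking the run. A minimal sketch of checking for that setuid bit (illustrative only, not minikube's code; the path below is a placeholder):

	// setuid_check.go - sketch: detect whether a driver binary already has
	// the setuid bit, the condition the skipped "sudo chmod u+s" would set.
	package main

	import (
		"fmt"
		"os"
	)

	func hasSetuid(path string) (bool, error) {
		info, err := os.Stat(path)
		if err != nil {
			return false, err
		}
		return info.Mode()&os.ModeSetuid != 0, nil
	}

	func main() {
		// Path is illustrative; the test uses a temp MINIKUBE_HOME.
		ok, err := hasSetuid("/usr/local/bin/docker-machine-driver-hyperkit")
		if err != nil {
			fmt.Println("stat failed:", err)
			return
		}
		fmt.Println("setuid bit set:", ok)
	}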

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.56s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18711
- KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current961878601/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current961878601/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current961878601/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current961878601/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.56s)

TestStoppedBinaryUpgrade/Setup (1.18s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)

TestStoppedBinaryUpgrade/Upgrade (87.65s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.3433554086 start -p stopped-upgrade-199000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.3433554086 start -p stopped-upgrade-199000 --memory=2200 --vm-driver=hyperkit : (43.882972184s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.3433554086 -p stopped-upgrade-199000 stop
E0422 04:57:49.974599    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.26.0.3433554086 -p stopped-upgrade-199000 stop: (8.263246875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-199000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-199000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (35.503339597s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.65s)
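
The Upgrade subtest drives an archived release binary through start and stop, then points the freshly built binary at the same profile. A sketch of that orchestration pattern with os/exec (binary paths and profile name are placeholders, not the temp paths used above):

	// upgrade_flow.go - sketch of the old-binary -> stop -> new-binary pattern
	// the test exercises; binary paths and profile name are placeholders.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(bin string, args ...string) error {
		cmd := exec.Command(bin, args...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ %s %v\n%s", bin, args, out)
		return err
	}

	func main() {
		oldBin, newBin, profile := "/tmp/minikube-v1.26.0", "out/minikube", "stopped-upgrade"
		// 1) provision with the old release, 2) stop it, 3) start with the new binary.
		steps := [][]string{
			{oldBin, "start", "-p", profile, "--memory=2200"},
			{oldBin, "stop", "-p", profile},
			{newBin, "start", "-p", profile, "--memory=2200"},
		}
		for _, s := range steps {
			if err := run(s[0], s[1:]...); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}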

TestPause/serial/Start (91.39s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-337000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-337000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m31.386883884s)
--- PASS: TestPause/serial/Start (91.39s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-199000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-199000: (2.932651629s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.93s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.51s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-549000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-549000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (508.802595ms)
-- stdout --
	* [NoKubernetes-549000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18711-1033/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18711-1033/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.51s)
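
Exit status 14 here is minikube's MK_USAGE class: --no-kubernetes and --kubernetes-version are mutually exclusive, so the command is rejected before any VM work starts. A minimal sketch of that kind of flag guard (illustrative, using the standard flag package rather than minikube's actual CLI stack):

	// flag_conflict.go - sketch of the mutual-exclusion check behind MK_USAGE;
	// flag names mirror the CLI, the validation logic here is illustrative.
	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
		flag.Parse()

		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // same exit status the test asserts
		}
		fmt.Println("flags ok")
	}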

TestNoKubernetes/serial/StartWithK8s (39.43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-549000 --driver=hyperkit 
E0422 04:58:44.369547    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-549000 --driver=hyperkit : (39.268420138s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-549000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.43s)

TestNoKubernetes/serial/StartWithStopK8s (17.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-549000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-549000 --no-kubernetes --driver=hyperkit : (14.911375584s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-549000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-549000 status -o json: exit status 2 (149.275897ms)
-- stdout --
	{"Name":"NoKubernetes-549000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-549000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-549000: (2.436072577s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.50s)
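
status -o json prints one object per node, and exit status 2 flags the degraded combination seen above (host Running, kubelet and apiserver Stopped). A sketch of decoding that object (the struct is inferred from the printed output, not taken from minikube's types):

	// status_decode.go - sketch: decode the `minikube status -o json` object
	// shown above; the struct is inferred from that output.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type NodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-549000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st NodeStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}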

TestNoKubernetes/serial/Start (20.91s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-549000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-549000 --no-kubernetes --driver=hyperkit : (20.910123555s)
--- PASS: TestNoKubernetes/serial/Start (20.91s)

TestPause/serial/SecondStartNoReconfiguration (45.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-337000 --alsologtostderr -v=1 --driver=hyperkit 
E0422 04:59:53.316196    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-337000 --alsologtostderr -v=1 --driver=hyperkit : (45.81562092s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.83s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-549000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-549000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (137.936809ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)
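
The verification relies on systemd semantics: `systemctl is-active` exits 0 only for an active unit, and the observed status 3 means inactive, which the test accepts as proof kubelet is not running. A local sketch of reading that exit code (the remote ssh hop the test uses is omitted):

	// unit_active.go - sketch: run `systemctl is-active` and read the exit code;
	// a non-zero code (3 = inactive) means the unit is not running.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &exitErr):
			fmt.Println("kubelet not active, exit code:", exitErr.ExitCode())
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}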

TestNoKubernetes/serial/ProfileList (17.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
E0422 05:00:06.125151    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 05:00:07.416944    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (17.639603979s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.96s)

TestNoKubernetes/serial/Stop (2.45s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-549000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-549000: (2.446677139s)
--- PASS: TestNoKubernetes/serial/Stop (2.45s)

TestNoKubernetes/serial/StartNoArgs (19.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-549000 --driver=hyperkit 
E0422 05:00:33.815988    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-549000 --driver=hyperkit : (19.333786485s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.33s)

TestPause/serial/Pause (0.56s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-337000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

TestPause/serial/VerifyStatus (0.17s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-337000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-337000 --output=json --layout=cluster: exit status 2 (169.961616ms)
-- stdout --
	{"Name":"pause-337000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-337000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.17s)
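
With --layout=cluster, status borrows HTTP-style codes per component: 200 OK, 405 Stopped, 418 Paused, which is why a fully paused cluster still exits 2 rather than 0. A sketch of decoding a trimmed version of the JSON above (structs inferred from the printed output, reduced to a few fields):

	// layout_decode.go - sketch: decode the --layout=cluster JSON above;
	// structs are inferred from the printed output, trimmed to a few fields.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type ClusterStatus struct {
		Name       string
		StatusCode int
		StatusName string // e.g. 418 "Paused", 200 "OK", 405 "Stopped"
		Nodes      []struct {
			Name       string
			Components map[string]Component
		}
	}

	func main() {
		raw := `{"Name":"pause-337000","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-337000","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var cs ClusterStatus
		if err := json.Unmarshal([]byte(raw), &cs); err != nil {
			panic(err)
		}
		fmt.Println(cs.StatusName, "->", cs.Nodes[0].Components["kubelet"].StatusName)
	}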

TestPause/serial/Unpause (0.51s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-337000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

TestPause/serial/PauseAgain (0.6s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-337000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

TestPause/serial/DeletePaused (5.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-337000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-337000 --alsologtostderr -v=5: (5.812236991s)
--- PASS: TestPause/serial/DeletePaused (5.81s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-549000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-549000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.749352ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestNetworkPlugins/group/auto/Start (181s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (3m0.999430046s)
--- PASS: TestNetworkPlugins/group/auto/Start (181.00s)

TestPause/serial/VerifyDeletedResources (0.21s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)

TestNetworkPlugins/group/calico/Start (88.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m28.805314451s)
--- PASS: TestNetworkPlugins/group/calico/Start (88.81s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gkqhx" [c09b918a-c44c-4f4c-aeeb-8ba19909dc30] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005481879s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
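
The ControllerPod subtests poll up to 10m0s for pods matching the CNI's label selector to become healthy. A sketch of the same wait using client-go (kubeconfig discovery and the k8s-app=calico-node selector as shown above; the polling loop itself is illustrative, not the test's helper):

	// pod_wait.go - sketch: poll for pods matching a label selector to reach
	// Running, the pattern behind the "waiting 10m0s for pods" lines above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(10 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
				metav1.ListOptions{LabelSelector: "k8s-app=calico-node"})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("healthy:", pods.Items[0].Name)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for calico-node")
	}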

TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

TestNetworkPlugins/group/calico/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-v7wff" [698220ac-1477-4d10-93cb-faf982029317] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-v7wff" [698220ac-1477-4d10-93cb-faf982029317] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004385985s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.20s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)
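
The Localhost and HairPin checks in each CNI group both reduce to `nc -w 5 -z` from inside the netcat pod: one dials localhost:8080, the other dials the pod's own service name, which only succeeds if the CNI supports hairpin traffic. The Go equivalent of that -z probe (target addresses illustrative):

	// zscan.go - sketch: the Go equivalent of `nc -w 5 -z host 8080`,
	// i.e. "can I open a TCP connection within the timeout".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func canConnect(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		// "localhost:8080" mirrors the Localhost check; "netcat:8080" (a service
		// name resolving back to the caller's own pod) mirrors the HairPin check.
		for _, addr := range []string{"localhost:8080", "netcat:8080"} {
			fmt.Println(addr, "->", canConnect(addr))
		}
	}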

TestNetworkPlugins/group/custom-flannel/Start (178.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (2m58.837188674s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (178.84s)

TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

TestNetworkPlugins/group/auto/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sdsw9" [79cfe945-7ef2-44d4-8109-8ba1fa5e05ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0422 05:03:44.370931    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-sdsw9" [79cfe945-7ef2-44d4-8109-8ba1fa5e05ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004549622s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.14s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (80.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0422 05:04:53.318998    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 05:05:06.125811    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m20.861956026s)
--- PASS: TestNetworkPlugins/group/false/Start (80.86s)

TestNetworkPlugins/group/false/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

TestNetworkPlugins/group/false/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c7tss" [4dfaf2c6-32b3-432c-adff-734ee666d7d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-c7tss" [4dfaf2c6-32b3-432c-adff-734ee666d7d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003805268s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.14s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-55lr2" [be4d87e8-e9f8-448f-8357-73f59bc9129e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-55lr2" [be4d87e8-e9f8-448f-8357-73f59bc9129e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00463011s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/Start (63.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m3.018598574s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.02s)

TestNetworkPlugins/group/flannel/Start (61.7s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m1.704707608s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.70s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gqghw" [842bfd5d-ad5d-4cae-99b2-24e2cce42ec9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003929957s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.14s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w79g9" [93cba5ee-2398-4488-a380-f374bc311fb3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0422 05:07:12.334541    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:12.339868    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:12.350612    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:12.372170    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:12.413266    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:12.494996    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:12.656416    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:12.977805    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:13.619030    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:07:14.899161    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-w79g9" [93cba5ee-2398-4488-a380-f374bc311fb3] Running
E0422 05:07:17.459331    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005004302s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.14s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q8qgt" [b604c76c-eed8-4766-b3d6-0820bd748f1f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002922129s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

TestNetworkPlugins/group/flannel/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d8xwh" [3e6bdf88-a7a2-446e-98ea-604e27d3637a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-d8xwh" [3e6bdf88-a7a2-446e-98ea-604e27d3637a] Running
E0422 05:07:32.821222    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004921521s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.14s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (55.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (55.241781807s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.24s)

TestNetworkPlugins/group/bridge/Start (56.63s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0422 05:07:56.381913    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 05:08:34.263523    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (56.627008716s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.63s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fx7bz" [36b81993-7f01-4be9-8f5e-ea1fa4255a1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fx7bz" [36b81993-7f01-4be9-8f5e-ea1fa4255a1e] Running
E0422 05:08:43.668960    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:43.674545    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:43.685024    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:43.707118    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:43.747350    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:43.828163    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:43.988445    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:44.308563    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:44.373209    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 05:08:44.948726    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:08:46.228926    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003120715s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

TestNetworkPlugins/group/bridge/NetCatPod (12.14s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ggvnw" [cc56e6b5-ef65-4edf-afd0-6c2e5a95ddb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0422 05:08:53.910775    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-ggvnw" [cc56e6b5-ef65-4edf-afd0-6c2e5a95ddb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004102161s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.14s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (84.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0422 05:09:04.151414    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-115000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m24.183139661s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (84.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (166.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-647000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0422 05:09:24.632454    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:09:53.318456    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 05:09:56.184482    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:10:05.593459    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:10:06.127612    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-647000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m46.424879168s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (166.42s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-115000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-115000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xnfz7" [783afd07-6463-4ad6-a11d-e3198e103683] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0422 05:10:32.486630    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:32.491719    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:32.501886    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:32.522452    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:32.563181    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:32.644204    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:32.804264    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:33.124539    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-xnfz7" [783afd07-6463-4ad6-a11d-e3198e103683] Running
E0422 05:10:33.765395    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:35.044719    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:10:37.604115    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003457283s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.15s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-115000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-115000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0422 05:23:50.909980    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (54.11s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-554000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0
E0422 05:10:57.196669    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
E0422 05:11:07.437377    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
E0422 05:11:13.436165    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:11:27.486959    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:11:27.917412    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
E0422 05:11:29.152428    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-554000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0: (54.106401875s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.11s)

TestStartStop/group/no-preload/serial/DeployApp (8.20s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-554000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d4aeaba7-c54b-4581-a14e-980c7a8b5434] Pending
helpers_test.go:344: "busybox" [d4aeaba7-c54b-4581-a14e-980c7a8b5434] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d4aeaba7-c54b-4581-a14e-980c7a8b5434] Running
E0422 05:11:54.395387    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003790735s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-554000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.20s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-554000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-554000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (8.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-554000 --alsologtostderr -v=3
E0422 05:12:03.403638    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:03.409123    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:03.419665    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:03.440115    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:03.481352    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:03.563595    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:03.723749    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:04.044517    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:04.684985    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:05.967358    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-554000 --alsologtostderr -v=3: (8.41740974s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-647000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [524d1538-ec35-4e1a-a054-61f4a08bc604] Pending
helpers_test.go:344: "busybox" [524d1538-ec35-4e1a-a054-61f4a08bc604] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0422 05:12:08.527475    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [524d1538-ec35-4e1a-a054-61f4a08bc604] Running
E0422 05:12:12.308141    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:12:13.647633    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002961949s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-647000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-554000 -n no-preload-554000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-554000 -n no-preload-554000: exit status 7 (76.47171ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-554000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0422 05:12:08.878125    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/no-preload/serial/SecondStart (293.47s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-554000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-554000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0: (4m53.303576131s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-554000 -n no-preload-554000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (293.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-647000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-647000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-647000 --alsologtostderr -v=3
E0422 05:12:18.223479    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:18.228618    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:18.239376    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:18.260620    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:18.300922    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:18.383119    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:18.544057    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:18.866262    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:19.506486    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:20.787142    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:23.348373    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:23.889471    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-647000 --alsologtostderr -v=3: (8.410167044s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-647000 -n old-k8s-version-647000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-647000 -n old-k8s-version-647000: exit status 7 (76.399608ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-647000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/old-k8s-version/serial/SecondStart (392.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-647000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0422 05:12:28.470624    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:38.711788    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:12:39.998417    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:12:44.370661    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:12:59.192414    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:13:16.316464    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:13:25.331469    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:13:30.798743    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
E0422 05:13:35.203539    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:35.209211    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:35.219422    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:35.240791    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:35.281401    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:35.362551    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:35.522742    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:35.844606    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:36.485735    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:37.765858    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:40.153660    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:13:40.326269    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:43.640579    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:13:44.345211    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 05:13:45.447706    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:50.911444    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:50.917150    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:50.927939    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:50.948553    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:50.989049    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:51.070242    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:51.231793    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:51.553835    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:52.194783    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:53.475068    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:13:55.689167    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:13:56.036206    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:14:01.156776    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:14:11.327246    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:14:11.397770    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:14:16.169337    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:14:31.877987    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:14:47.252916    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:14:53.291079    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 05:14:57.129820    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:15:02.074314    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:15:06.099551    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0422 05:15:12.839333    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:15:28.520180    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:28.526283    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:28.538478    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:28.559230    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:28.600291    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:28.680503    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:28.840621    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:29.161774    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:29.802804    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:31.084046    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:32.469793    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:15:33.645124    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:38.765386    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:15:46.950652    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
E0422 05:15:49.005597    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:16:00.156476    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/false-115000/client.crt: no such file or directory
E0422 05:16:09.486531    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
E0422 05:16:14.638599    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/custom-flannel-115000/client.crt: no such file or directory
E0422 05:16:19.049791    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:16:34.760195    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
E0422 05:16:47.392424    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 05:16:50.447883    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-647000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m32.730254482s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-647000 -n old-k8s-version-647000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (392.94s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7x82b" [442d4627-a22d-4776-ad35-35e1d5cdab3d] Running
E0422 05:17:03.402575    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00467946s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7x82b" [442d4627-a22d-4776-ad35-35e1d5cdab3d] Running
E0422 05:17:12.307083    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00439464s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-554000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-554000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.94s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-554000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-554000 -n no-preload-554000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-554000 -n no-preload-554000: exit status 2 (166.092507ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-554000 -n no-preload-554000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-554000 -n no-preload-554000: exit status 2 (163.286026ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-554000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-554000 -n no-preload-554000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-554000 -n no-preload-554000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.94s)

TestStartStop/group/embed-certs/serial/FirstStart (54.73s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-596000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0
E0422 05:17:31.092711    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:17:45.914533    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:18:12.369139    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kubenet-115000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-596000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0: (54.732661164s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.73s)

TestStartStop/group/embed-certs/serial/DeployApp (9.21s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-596000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b12a703-cfd3-42f3-bf5f-3d2dba12195e] Pending
helpers_test.go:344: "busybox" [5b12a703-cfd3-42f3-bf5f-3d2dba12195e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b12a703-cfd3-42f3-bf5f-3d2dba12195e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004392278s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-596000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.21s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-596000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-596000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/embed-certs/serial/Stop (8.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-596000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-596000 --alsologtostderr -v=3: (8.429799353s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.43s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-596000 -n embed-certs-596000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-596000 -n embed-certs-596000: exit status 7 (75.424491ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-596000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/embed-certs/serial/SecondStart (292.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-596000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0
E0422 05:18:35.203900    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:18:43.639863    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/auto-115000/client.crt: no such file or directory
E0422 05:18:44.343674    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/functional-984000/client.crt: no such file or directory
E0422 05:18:50.910574    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/bridge-115000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-596000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0: (4m51.921666793s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-596000 -n embed-certs-596000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (292.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ngcmx" [e2a9c88b-2d55-4649-aa56-37e55d2c1687] Running
E0422 05:19:02.891694    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00484382s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ngcmx" [e2a9c88b-2d55-4649-aa56-37e55d2c1687] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003672872s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-647000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-647000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-647000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-647000 -n old-k8s-version-647000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-647000 -n old-k8s-version-647000: exit status 2 (165.524755ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-647000 -n old-k8s-version-647000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-647000 -n old-k8s-version-647000: exit status 2 (174.251536ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-647000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-647000 -n old-k8s-version-647000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-647000 -n old-k8s-version-647000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.96s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-654000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-654000 --alsologtostderr -v=3: (8.423164059s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000: exit status 7 (74.031319ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-654000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-654000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0
E0422 05:21:51.306930    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:51.313002    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:51.323408    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:51.345263    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:51.385735    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:51.465882    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:51.625969    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:51.946278    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:52.587801    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:53.870008    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:21:56.431477    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:22:01.552683    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:22:03.401216    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/kindnet-115000/client.crt: no such file or directory
E0422 05:22:07.437285    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:07.442919    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:07.453438    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:07.474793    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:07.515099    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:07.596531    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:07.757748    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:08.079520    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:08.721013    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:10.001186    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:11.793463    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:22:12.306536    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
E0422 05:22:12.561360    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:17.682666    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:18.222109    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/flannel-115000/client.crt: no such file or directory
E0422 05:22:27.924191    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:22:32.273721    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-654000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0: (52.478482518s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.67s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-84zhv" [b0463523-99dd-43a3-8dfa-c3d629e9f321] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-84zhv" [b0463523-99dd-43a3-8dfa-c3d629e9f321] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004353066s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-84zhv" [b0463523-99dd-43a3-8dfa-c3d629e9f321] Running
E0422 05:22:48.405820    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004231416s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-654000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-654000 image list --format=json
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-654000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000: exit status 2 (169.63568ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000: exit status 2 (169.693797ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-654000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-654000 -n default-k8s-diff-port-654000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.97s)

TestStartStop/group/newest-cni/serial/FirstStart (52.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-960000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0
E0422 05:23:13.234305    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-960000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0: (52.137857255s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.14s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-97n8z" [38d753bb-4ae0-44e5-8cf4-cfebdbe0667e] Running
E0422 05:23:29.366789    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002775079s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-97n8z" [38d753bb-4ae0-44e5-8cf4-cfebdbe0667e] Running
E0422 05:23:35.202763    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/enable-default-cni-115000/client.crt: no such file or directory
E0422 05:23:35.358490    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/calico-115000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003246937s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-596000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-596000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/embed-certs/serial/Pause (2.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-596000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-596000 -n embed-certs-596000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-596000 -n embed-certs-596000: exit status 2 (179.349663ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-596000 -n embed-certs-596000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-596000 -n embed-certs-596000: exit status 2 (178.974053ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-596000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-596000 -n embed-certs-596000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-596000 -n embed-certs-596000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-960000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/newest-cni/serial/Stop (8.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-960000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-960000 --alsologtostderr -v=3: (8.419378816s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-960000 -n newest-cni-960000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-960000 -n newest-cni-960000: exit status 7 (75.187933ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-960000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/newest-cni/serial/SecondStart (52.97s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-960000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0
E0422 05:24:35.156508    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/no-preload-554000/client.crt: no such file or directory
E0422 05:24:36.353528    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
E0422 05:24:51.287466    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/old-k8s-version-647000/client.crt: no such file or directory
E0422 05:24:53.289564    1484 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18711-1033/.minikube/profiles/addons-483000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-960000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0: (52.80283163s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-960000 -n newest-cni-960000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (52.97s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-960000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/newest-cni/serial/Pause (1.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-960000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-960000 -n newest-cni-960000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-960000 -n newest-cni-960000: exit status 2 (169.047492ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-960000 -n newest-cni-960000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-960000 -n newest-cni-960000: exit status 2 (182.509063ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-960000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-960000 -n newest-cni-960000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-960000 -n newest-cni-960000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.83s)

Test skip (20/332)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (6.17s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-115000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-115000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-115000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-115000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-115000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-115000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-115000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-115000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-115000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-115000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-115000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-115000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-115000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-115000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-115000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-115000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-115000

>>> host: docker daemon status:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: docker daemon config:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: docker system info:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: cri-docker daemon status:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: cri-docker daemon config:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: cri-dockerd version:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: containerd daemon status:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: containerd daemon config:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

>>> host: containerd config dump:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-115000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-115000"

                                                
                                                
----------------------- debugLogs end: cilium-115000 [took: 5.794111835s] --------------------------------
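Every probe in the debugLogs block above failed the same way because the cilium-115000 profile was never created, so there was no guest VM or kubeconfig entry to inspect. The remedy the log itself suggests is to start the profile first; a sketch of driving that from Go via os/exec, mirroring the (dbg) Run invocations in this suite. The --cni=cilium flag is an assumption about how a Cilium-flavored profile would be started, not something this log shows:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The log's remedy line is: minikube start -p cilium-115000.
	// --cni=cilium is an assumed extra flag for a Cilium profile;
	// the log only suggests the bare start command.
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "cilium-115000", "--cni=cilium")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	log.Println("profile cilium-115000 started; the debugLogs probes would now have a target")
}
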
helpers_test.go:175: Cleaning up "cilium-115000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-115000
--- SKIP: TestNetworkPlugins/group/cilium (6.17s)

TestStartStop/group/disable-driver-mounts (0.41s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-341000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-341000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)
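
The skip at start_stop_delete_test.go:103 is a driver gate: the disable-driver-mounts sub-test is only meaningful on VirtualBox, so under the hyperkit driver it bails out before creating anything, and the 0.41s is just the profile cleanup shown above. A minimal sketch of such a gate, assuming a driverIs-style helper; the helper and the hard-coded driver value are illustrative, and only the skip message mirrors the log:

package integration

import (
	"strings"
	"testing"
)

// testDriver stands in for this run's driver, per the report header.
var testDriver = "hyperkit"

// driverIs is an assumed helper (not shown in this log) that reports
// whether the suite is running against the named VM driver.
func driverIs(name string) bool {
	return strings.Contains(testDriver, name)
}

func TestDisableDriverMounts(t *testing.T) {
	if !driverIs("virtualbox") {
		// Produces a SKIP like the one logged at start_stop_delete_test.go:103.
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}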