Test Report: Docker_Linux 15565

0deb2b878fee68d58aa080d8e3381e2f3cf3cac2:2023-01-28:27629

Test failures (1/308)

Order  Failed Test                           Duration (s)
205    TestMultiNode/serial/StartAfterStop   149.1
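A hedged local-repro sketch (the report does not show the exact CI invocation; the binary path, timeout, and any build tags the harness requires are assumptions here). StartAfterStop depends on the cluster created by the earlier TestMultiNode/serial subtests, so the -run filter selects the whole serial group rather than the single subtest:

	# from the minikube repo root, with out/minikube-linux-amd64 already built
	go test -v -timeout 60m ./test/integration -run "TestMultiNode/serial"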
TestMultiNode/serial/StartAfterStop (149.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr
E0128 18:37:48.070356   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:37:55.646570   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:38:15.756810   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:38:32.991092   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr: exit status 80 (2m26.010817186s)
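What the captured stderr below shows: each kubeadm join attempt is rejected because the cluster still reports a Ready node named "multinode-052675-m03", and the kubeadm reset run between attempts aborts because the guest exposes two CRI endpoints (containerd and cri-dockerd), so every retry hits the same state until the command gives up after 2m26s with exit status 80. A hedged cleanup sketch, following the remedies the kubeadm errors themselves suggest (profile, node, and socket names are taken from this run and will differ elsewhere):

	# replay the failing step
	out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr

	# remove the stale Node object, then reset the guest with an explicit CRI
	# socket, since "kubeadm reset" without one aborts on the two endpoints it finds
	kubectl delete node multinode-052675-m03
	minikube ssh -p multinode-052675 -n m03 -- \
	  sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock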
-- stdout --
	* Starting worker node multinode-052675-m03 in cluster multinode-052675
	* Pulling base image ...
	* Restarting existing docker container for "multinode-052675-m03" ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0128 18:37:28.768159  140370 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:37:28.768346  140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:37:28.768358  140370 out.go:309] Setting ErrFile to fd 2...
	I0128 18:37:28.768363  140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:37:28.768525  140370 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	I0128 18:37:28.768811  140370 mustload.go:65] Loading cluster: multinode-052675
	I0128 18:37:28.769117  140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:37:28.769503  140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
	W0128 18:37:28.792776  140370 host.go:58] "multinode-052675-m03" host status: Stopped
	I0128 18:37:28.795620  140370 out.go:177] * Starting worker node multinode-052675-m03 in cluster multinode-052675
	I0128 18:37:28.797097  140370 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 18:37:28.798505  140370 out.go:177] * Pulling base image ...
	I0128 18:37:28.799824  140370 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 18:37:28.799871  140370 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 18:37:28.799886  140370 cache.go:57] Caching tarball of preloaded images
	I0128 18:37:28.799923  140370 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 18:37:28.799986  140370 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 18:37:28.799998  140370 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 18:37:28.800148  140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
	I0128 18:37:28.823561  140370 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 18:37:28.823583  140370 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 18:37:28.823602  140370 cache.go:193] Successfully downloaded all kic artifacts
	I0128 18:37:28.823636  140370 start.go:364] acquiring machines lock for multinode-052675-m03: {Name:mk417407859367a958d60a86e439689c454fd088 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 18:37:28.823725  140370 start.go:368] acquired machines lock for "multinode-052675-m03" in 40.859µs
	I0128 18:37:28.823755  140370 start.go:96] Skipping create...Using existing machine configuration
	I0128 18:37:28.823765  140370 fix.go:55] fixHost starting: m03
	I0128 18:37:28.823991  140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
	I0128 18:37:28.851728  140370 fix.go:103] recreateIfNeeded on multinode-052675-m03: state=Stopped err=<nil>
	W0128 18:37:28.851772  140370 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 18:37:28.854195  140370 out.go:177] * Restarting existing docker container for "multinode-052675-m03" ...
	I0128 18:37:28.855947  140370 cli_runner.go:164] Run: docker start multinode-052675-m03
	I0128 18:37:29.217988  140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
	I0128 18:37:29.243480  140370 kic.go:426] container "multinode-052675-m03" state is running.
	I0128 18:37:29.243903  140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
	I0128 18:37:29.270969  140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
	I0128 18:37:29.271203  140370 machine.go:88] provisioning docker machine ...
	I0128 18:37:29.271232  140370 ubuntu.go:169] provisioning hostname "multinode-052675-m03"
	I0128 18:37:29.271277  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:29.294440  140370 main.go:141] libmachine: Using SSH client type: native
	I0128 18:37:29.294650  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0128 18:37:29.294672  140370 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-052675-m03 && echo "multinode-052675-m03" | sudo tee /etc/hostname
	I0128 18:37:29.295321  140370 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41948->127.0.0.1:32867: read: connection reset by peer
	I0128 18:37:32.436714  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675-m03
	
	I0128 18:37:32.436792  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:32.460540  140370 main.go:141] libmachine: Using SSH client type: native
	I0128 18:37:32.460694  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0128 18:37:32.460715  140370 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-052675-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-052675-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 18:37:32.592073  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 18:37:32.592124  140370 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
	I0128 18:37:32.592149  140370 ubuntu.go:177] setting up certificates
	I0128 18:37:32.592156  140370 provision.go:83] configureAuth start
	I0128 18:37:32.592205  140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
	I0128 18:37:32.615257  140370 provision.go:138] copyHostCerts
	I0128 18:37:32.615326  140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
	I0128 18:37:32.615335  140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
	I0128 18:37:32.615398  140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
	I0128 18:37:32.615486  140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
	I0128 18:37:32.615498  140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
	I0128 18:37:32.615524  140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
	I0128 18:37:32.615567  140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
	I0128 18:37:32.615575  140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
	I0128 18:37:32.615594  140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
	I0128 18:37:32.615630  140370 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675-m03]
	I0128 18:37:32.730355  140370 provision.go:172] copyRemoteCerts
	I0128 18:37:32.730428  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 18:37:32.730461  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:32.755868  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
	I0128 18:37:32.848031  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 18:37:32.867603  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0128 18:37:32.885889  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 18:37:32.904961  140370 provision.go:86] duration metric: configureAuth took 312.790194ms
	I0128 18:37:32.904990  140370 ubuntu.go:193] setting minikube options for container-runtime
	I0128 18:37:32.905181  140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:37:32.905241  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:32.930266  140370 main.go:141] libmachine: Using SSH client type: native
	I0128 18:37:32.930415  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0128 18:37:32.930429  140370 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 18:37:33.061366  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 18:37:33.061402  140370 ubuntu.go:71] root file system type: overlay
	I0128 18:37:33.061606  140370 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 18:37:33.061688  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:33.087541  140370 main.go:141] libmachine: Using SSH client type: native
	I0128 18:37:33.087719  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0128 18:37:33.087814  140370 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 18:37:33.230445  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 18:37:33.230514  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:33.256238  140370 main.go:141] libmachine: Using SSH client type: native
	I0128 18:37:33.256411  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0128 18:37:33.256474  140370 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 18:37:33.392286  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 18:37:33.392317  140370 machine.go:91] provisioned docker machine in 4.121098442s
	I0128 18:37:33.392328  140370 start.go:300] post-start starting for "multinode-052675-m03" (driver="docker")
	I0128 18:37:33.392335  140370 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 18:37:33.392399  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 18:37:33.392436  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:33.418787  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
	I0128 18:37:33.512281  140370 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 18:37:33.514993  140370 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 18:37:33.515021  140370 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 18:37:33.515039  140370 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 18:37:33.515047  140370 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 18:37:33.515065  140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
	I0128 18:37:33.515125  140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
	I0128 18:37:33.515207  140370 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
	I0128 18:37:33.515300  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 18:37:33.522350  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
	I0128 18:37:33.541225  140370 start.go:303] post-start completed in 148.881332ms
	I0128 18:37:33.541302  140370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 18:37:33.541341  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:33.567063  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
	I0128 18:37:33.656837  140370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 18:37:33.660572  140370 fix.go:57] fixHost completed within 4.836799887s
	I0128 18:37:33.660596  140370 start.go:83] releasing machines lock for "multinode-052675-m03", held for 4.836857796s
	I0128 18:37:33.660659  140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
	I0128 18:37:33.683972  140370 ssh_runner.go:195] Run: systemctl --version
	I0128 18:37:33.684004  140370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 18:37:33.684023  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:33.684051  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
	I0128 18:37:33.710480  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
	I0128 18:37:33.711899  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
	I0128 18:37:33.800897  140370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 18:37:33.836566  140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 18:37:33.853295  140370 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 18:37:33.853399  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 18:37:33.860387  140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 18:37:33.874296  140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 18:37:33.881997  140370 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0128 18:37:33.882027  140370 start.go:483] detecting cgroup driver to use...
	I0128 18:37:33.882056  140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 18:37:33.882204  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 18:37:33.895781  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 18:37:33.904320  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 18:37:33.912939  140370 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 18:37:33.912987  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 18:37:33.922779  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 18:37:33.930843  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 18:37:33.938894  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 18:37:33.947415  140370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 18:37:33.955190  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 18:37:33.963495  140370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 18:37:33.969954  140370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 18:37:33.976395  140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:37:34.066470  140370 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 18:37:34.145520  140370 start.go:483] detecting cgroup driver to use...
	I0128 18:37:34.145571  140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 18:37:34.145629  140370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 18:37:34.155619  140370 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 18:37:34.155677  140370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 18:37:34.164697  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 18:37:34.179339  140370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 18:37:34.287521  140370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 18:37:34.395438  140370 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 18:37:34.395467  140370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 18:37:34.408864  140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:37:34.487254  140370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 18:37:34.716580  140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 18:37:34.796937  140370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 18:37:34.876051  140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 18:37:34.951893  140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:37:35.035868  140370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 18:37:35.052109  140370 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 18:37:35.052172  140370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 18:37:35.055415  140370 start.go:551] Will wait 60s for crictl version
	I0128 18:37:35.055467  140370 ssh_runner.go:195] Run: which crictl
	I0128 18:37:35.058181  140370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 18:37:35.135807  140370 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 18:37:35.135864  140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 18:37:35.161958  140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 18:37:35.193909  140370 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 18:37:35.194009  140370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:37:35.291519  140370 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-28 18:37:35.214249226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:37:35.291637  140370 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 18:37:35.313607  140370 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0128 18:37:35.317298  140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 18:37:35.326968  140370 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.4
	I0128 18:37:35.327018  140370 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:37:35.327144  140370 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
	I0128 18:37:35.327197  140370 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
	I0128 18:37:35.327263  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
	W0128 18:37:35.327288  140370 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
	I0128 18:37:35.327300  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 18:37:35.327326  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
	I0128 18:37:35.327349  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
	I0128 18:37:35.327368  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
	I0128 18:37:35.327402  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
	I0128 18:37:35.327967  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 18:37:35.345516  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 18:37:35.363369  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 18:37:35.380552  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 18:37:35.397674  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
	I0128 18:37:35.416171  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
	I0128 18:37:35.435443  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 18:37:35.452809  140370 ssh_runner.go:195] Run: openssl version
	I0128 18:37:35.457757  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
	I0128 18:37:35.465226  140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
	I0128 18:37:35.468203  140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
	I0128 18:37:35.468250  140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
	I0128 18:37:35.472911  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
	I0128 18:37:35.479788  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
	I0128 18:37:35.487495  140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
	I0128 18:37:35.491144  140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
	I0128 18:37:35.491199  140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
	I0128 18:37:35.496365  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 18:37:35.503586  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 18:37:35.511636  140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:37:35.515214  140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:37:35.515271  140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:37:35.520350  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 18:37:35.527590  140370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 18:37:35.595260  140370 cni.go:84] Creating CNI manager for ""
	I0128 18:37:35.595281  140370 cni.go:136] 3 nodes found, recommending kindnet
	I0128 18:37:35.595290  140370 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 18:37:35.595309  140370 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 18:37:35.595443  140370 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-052675-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 18:37:35.595530  140370 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 18:37:35.595574  140370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 18:37:35.604342  140370 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 18:37:35.604401  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0128 18:37:35.610782  140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0128 18:37:35.623062  140370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 18:37:35.635745  140370 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0128 18:37:35.638800  140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 18:37:35.647978  140370 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:37:35.648027  140370 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0128 18:37:35.648143  140370 addons.go:65] Setting storage-provisioner=true in profile "multinode-052675"
	I0128 18:37:35.648165  140370 addons.go:227] Setting addon storage-provisioner=true in "multinode-052675"
	I0128 18:37:35.648166  140370 addons.go:65] Setting default-storageclass=true in profile "multinode-052675"
	I0128 18:37:35.648194  140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:37:35.648219  140370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-052675"
	W0128 18:37:35.648174  140370 addons.go:236] addon storage-provisioner should already be in state true
	I0128 18:37:35.648229  140370 start.go:299] JoinCluster: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 18:37:35.648342  140370 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:37:35.648354  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0128 18:37:35.648403  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:37:35.648554  140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:37:35.648785  140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:37:35.678753  140370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 18:37:35.677915  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:37:35.680756  140370 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 18:37:35.680780  140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 18:37:35.680841  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:37:35.695309  140370 addons.go:227] Setting addon default-storageclass=true in "multinode-052675"
	W0128 18:37:35.695331  140370 addons.go:236] addon default-storageclass should already be in state true
	I0128 18:37:35.695353  140370 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:37:35.695742  140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:37:35.710623  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:37:35.723088  140370 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 18:37:35.723114  140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 18:37:35.723171  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:37:35.749745  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:37:35.818947  140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 18:37:35.833740  140370 start.go:312] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:37:35.833792  140370 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:37:35.834086  140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-052675-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0128 18:37:35.834128  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:37:35.855911  140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 18:37:35.866405  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:37:36.216130  140370 node.go:109] successfully drained node "m03"
	I0128 18:37:36.218504  140370 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0128 18:37:36.220255  140370 addons.go:492] enable addons completed in 572.236601ms: enabled=[storage-provisioner default-storageclass]
	I0128 18:37:36.220401  140370 node.go:125] successfully deleted node "m03"
	I0128 18:37:36.220416  140370 start.go:316] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:37:36.220437  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:37:36.220487  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
	E0128 18:37:36.384147  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:37:36.255380    1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:37:36.384173  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
	I0128 18:37:36.384189  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0128 18:37:36.422394  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:37:36.422421  140370 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:37:36.255380    1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:37:47.470499  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:37:47.470589  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
	E0128 18:37:47.620550  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:37:47.507087    1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:37:47.620577  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
	I0128 18:37:47.620591  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0128 18:37:47.656587  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:37:47.656613  140370 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:37:47.507087    1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:38:09.265264  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:38:09.265318  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
	E0128 18:38:09.421323  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:38:09.302407    2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:38:09.421352  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
	I0128 18:38:09.421365  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0128 18:38:09.458262  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:38:09.458304  140370 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:38:09.302407    2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:38:35.661652  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:38:35.661716  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
	E0128 18:38:35.817509  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:38:35.697493    2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:38:35.817536  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
	I0128 18:38:35.817547  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0128 18:38:35.855576  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:38:35.855612  140370 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:38:35.697493    2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:39:07.504180  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:39:07.504247  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
	E0128 18:39:07.655353  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:39:07.539815    2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:39:07.655375  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
	I0128 18:39:07.655389  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0128 18:39:07.694454  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:39:07.694486  140370 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:39:07.539815    2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:39:54.504816  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0128 18:39:54.504888  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
	E0128 18:39:54.657786  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:39:54.540796    3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:39:54.657811  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
	I0128 18:39:54.657827  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0128 18:39:54.694526  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0128 18:39:54.694560  140370 start.go:301] JoinCluster complete in 2m19.046332183s
	I0128 18:39:54.697658  140370 out.go:177] 
	W0128 18:39:54.699334  140370 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0128 18:39:54.540796    3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 18:39:54.699351  140370 out.go:239] * 
	W0128 18:39:54.701288  140370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 18:39:54.703217  140370 out.go:177] 

** /stderr **
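The failure is identical across all six join attempts above: kubeadm's kubelet-start phase aborts because a Node named "multinode-052675-m03" is still registered in the cluster, and the interleaved "kubeadm reset" between attempts never cleans up because the host exposes both the containerd and cri-dockerd CRI sockets, so kubeadm refuses to guess which one to use. Following the two hints printed in the errors themselves, a manual recovery sketch would look like this (assuming kubectl is pointed at this cluster; the endpoint, token, and CA hash are the ones minikube used above):

    # Remove the stale Node object that blocks the kubelet-start phase
    kubectl delete node multinode-052675-m03

    # Reset the worker with the CRI socket pinned explicitly, so kubeadm
    # does not abort on "Found multiple CRI endpoints on the host"
    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock

    # Rejoin with the same parameters the failed attempts used
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token vv2kvw.w468jr8o9qicj0iv \
        --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc \
        --cri-socket unix:///var/run/cri-dockerd.sock \
        --node-name=multinode-052675-m03

This is a recovery sketch, not what the harness does: minikube's retry loop (start.go:320-327 above) leaves the stale Node object in place, so every retry hits the same kubelet-start error.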
multinode_test.go:254: I0128 18:37:28.768159  140370 out.go:296] Setting OutFile to fd 1 ...
I0128 18:37:28.768346  140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:37:28.768358  140370 out.go:309] Setting ErrFile to fd 2...
I0128 18:37:28.768363  140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:37:28.768525  140370 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
I0128 18:37:28.768811  140370 mustload.go:65] Loading cluster: multinode-052675
I0128 18:37:28.769117  140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:28.769503  140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
W0128 18:37:28.792776  140370 host.go:58] "multinode-052675-m03" host status: Stopped
I0128 18:37:28.795620  140370 out.go:177] * Starting worker node multinode-052675-m03 in cluster multinode-052675
I0128 18:37:28.797097  140370 cache.go:120] Beginning downloading kic base image for docker with docker
I0128 18:37:28.798505  140370 out.go:177] * Pulling base image ...
I0128 18:37:28.799824  140370 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:37:28.799871  140370 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0128 18:37:28.799886  140370 cache.go:57] Caching tarball of preloaded images
I0128 18:37:28.799923  140370 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0128 18:37:28.799986  140370 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0128 18:37:28.799998  140370 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
I0128 18:37:28.800148  140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:37:28.823561  140370 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0128 18:37:28.823583  140370 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
I0128 18:37:28.823602  140370 cache.go:193] Successfully downloaded all kic artifacts
I0128 18:37:28.823636  140370 start.go:364] acquiring machines lock for multinode-052675-m03: {Name:mk417407859367a958d60a86e439689c454fd088 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0128 18:37:28.823725  140370 start.go:368] acquired machines lock for "multinode-052675-m03" in 40.859µs
I0128 18:37:28.823755  140370 start.go:96] Skipping create...Using existing machine configuration
I0128 18:37:28.823765  140370 fix.go:55] fixHost starting: m03
I0128 18:37:28.823991  140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
I0128 18:37:28.851728  140370 fix.go:103] recreateIfNeeded on multinode-052675-m03: state=Stopped err=<nil>
W0128 18:37:28.851772  140370 fix.go:129] unexpected machine state, will restart: <nil>
I0128 18:37:28.854195  140370 out.go:177] * Restarting existing docker container for "multinode-052675-m03" ...
I0128 18:37:28.855947  140370 cli_runner.go:164] Run: docker start multinode-052675-m03
I0128 18:37:29.217988  140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
I0128 18:37:29.243480  140370 kic.go:426] container "multinode-052675-m03" state is running.
I0128 18:37:29.243903  140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:29.270969  140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:37:29.271203  140370 machine.go:88] provisioning docker machine ...
I0128 18:37:29.271232  140370 ubuntu.go:169] provisioning hostname "multinode-052675-m03"
I0128 18:37:29.271277  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:29.294440  140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:29.294650  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:29.294672  140370 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-052675-m03 && echo "multinode-052675-m03" | sudo tee /etc/hostname
I0128 18:37:29.295321  140370 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41948->127.0.0.1:32867: read: connection reset by peer
I0128 18:37:32.436714  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675-m03

I0128 18:37:32.436792  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.460540  140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:32.460694  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:32.460715  140370 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-052675-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-052675-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0128 18:37:32.592073  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0128 18:37:32.592124  140370 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
I0128 18:37:32.592149  140370 ubuntu.go:177] setting up certificates
I0128 18:37:32.592156  140370 provision.go:83] configureAuth start
I0128 18:37:32.592205  140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:32.615257  140370 provision.go:138] copyHostCerts
I0128 18:37:32.615326  140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
I0128 18:37:32.615335  140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
I0128 18:37:32.615398  140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
I0128 18:37:32.615486  140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
I0128 18:37:32.615498  140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
I0128 18:37:32.615524  140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
I0128 18:37:32.615567  140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
I0128 18:37:32.615575  140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
I0128 18:37:32.615594  140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
I0128 18:37:32.615630  140370 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675-m03]
I0128 18:37:32.730355  140370 provision.go:172] copyRemoteCerts
I0128 18:37:32.730428  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0128 18:37:32.730461  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.755868  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:32.848031  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0128 18:37:32.867603  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0128 18:37:32.885889  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0128 18:37:32.904961  140370 provision.go:86] duration metric: configureAuth took 312.790194ms
I0128 18:37:32.904990  140370 ubuntu.go:193] setting minikube options for container-runtime
I0128 18:37:32.905181  140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:32.905241  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.930266  140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:32.930415  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:32.930429  140370 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0128 18:37:33.061366  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay

I0128 18:37:33.061402  140370 ubuntu.go:71] root file system type: overlay
I0128 18:37:33.061606  140370 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0128 18:37:33.061688  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.087541  140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:33.087719  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:33.087814  140370 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0128 18:37:33.230445  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0128 18:37:33.230514  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.256238  140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:33.256411  140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:33.256474  140370 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0128 18:37:33.392286  140370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0128 18:37:33.392317  140370 machine.go:91] provisioned docker machine in 4.121098442s
I0128 18:37:33.392328  140370 start.go:300] post-start starting for "multinode-052675-m03" (driver="docker")
I0128 18:37:33.392335  140370 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0128 18:37:33.392399  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0128 18:37:33.392436  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.418787  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.512281  140370 ssh_runner.go:195] Run: cat /etc/os-release
I0128 18:37:33.514993  140370 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0128 18:37:33.515021  140370 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0128 18:37:33.515039  140370 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0128 18:37:33.515047  140370 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0128 18:37:33.515065  140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
I0128 18:37:33.515125  140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
I0128 18:37:33.515207  140370 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
I0128 18:37:33.515300  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0128 18:37:33.522350  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:37:33.541225  140370 start.go:303] post-start completed in 148.881332ms
I0128 18:37:33.541302  140370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0128 18:37:33.541341  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.567063  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.656837  140370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0128 18:37:33.660572  140370 fix.go:57] fixHost completed within 4.836799887s
I0128 18:37:33.660596  140370 start.go:83] releasing machines lock for "multinode-052675-m03", held for 4.836857796s
I0128 18:37:33.660659  140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:33.683972  140370 ssh_runner.go:195] Run: systemctl --version
I0128 18:37:33.684004  140370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0128 18:37:33.684023  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.684051  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.710480  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.711899  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.800897  140370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0128 18:37:33.836566  140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0128 18:37:33.853295  140370 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
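Note: the find/sed one-liner above normalizes any loopback CNI config in place, inserting a "name": "loopback" key when it is missing and pinning cniVersion to 1.0.0. A quick spot-check of the patched file on the node (a sketch; assumes at least one *loopback.conf* file exists, as the log line above confirms):

    sudo grep -E '"name"|"cniVersion"' /etc/cni/net.d/*loopback.conf*
    # expected after patching:
    #   "name": "loopback",
    #   "cniVersion": "1.0.0"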
I0128 18:37:33.853399  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0128 18:37:33.860387  140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0128 18:37:33.874296  140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0128 18:37:33.881997  140370 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0128 18:37:33.882027  140370 start.go:483] detecting cgroup driver to use...
I0128 18:37:33.882056  140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:37:33.882204  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:37:33.895781  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0128 18:37:33.904320  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0128 18:37:33.912939  140370 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0128 18:37:33.912987  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0128 18:37:33.922779  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:37:33.930843  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0128 18:37:33.938894  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:37:33.947415  140370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0128 18:37:33.955190  140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
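Note: taken together, the sed expressions above leave /etc/containerd/config.toml using cgroupfs as the cgroup driver, registry.k8s.io/pause:3.9 as the sandbox image, the runc v2 shim, and /etc/cni/net.d as the CNI conf dir. A spot-check of the edited file (sketch):

    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # expected, per the sed expressions above:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   conf_dir = "/etc/cni/net.d"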
I0128 18:37:33.963495  140370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0128 18:37:33.969954  140370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0128 18:37:33.976395  140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:34.066470  140370 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0128 18:37:34.145520  140370 start.go:483] detecting cgroup driver to use...
I0128 18:37:34.145571  140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:37:34.145629  140370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0128 18:37:34.155619  140370 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0128 18:37:34.155677  140370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0128 18:37:34.164697  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:37:34.179339  140370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0128 18:37:34.287521  140370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0128 18:37:34.395438  140370 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0128 18:37:34.395467  140370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0128 18:37:34.408864  140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:34.487254  140370 ssh_runner.go:195] Run: sudo systemctl restart docker
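Note: the 144-byte /etc/docker/daemon.json copied above is what pins Docker to the cgroupfs driver; the log does not echo the file itself, so the following is only an illustrative minimal equivalent, not the verbatim contents:

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker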
I0128 18:37:34.716580  140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:37:34.796937  140370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0128 18:37:34.876051  140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:37:34.951893  140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:35.035868  140370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0128 18:37:35.052109  140370 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0128 18:37:35.052172  140370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0128 18:37:35.055415  140370 start.go:551] Will wait 60s for crictl version
I0128 18:37:35.055467  140370 ssh_runner.go:195] Run: which crictl
I0128 18:37:35.058181  140370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0128 18:37:35.135807  140370 start.go:567] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  20.10.23
RuntimeApiVersion:  v1alpha2
I0128 18:37:35.135864  140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:37:35.161958  140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:37:35.193909  140370 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0128 18:37:35.194009  140370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0128 18:37:35.291519  140370 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-28 18:37:35.214249226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0128 18:37:35.291637  140370 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0128 18:37:35.313607  140370 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
I0128 18:37:35.317298  140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
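Note: the one-liner above is an idempotent /etc/hosts update: it filters out any stale host.minikube.internal entry, appends the current mapping, and copies the temp file back over /etc/hosts. The result can be verified with:

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.58.1	host.minikube.internal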
I0128 18:37:35.326968  140370 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.4
I0128 18:37:35.327018  140370 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:37:35.327144  140370 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
I0128 18:37:35.327197  140370 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
I0128 18:37:35.327263  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
W0128 18:37:35.327288  140370 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
I0128 18:37:35.327300  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
I0128 18:37:35.327326  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
I0128 18:37:35.327349  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
I0128 18:37:35.327368  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
I0128 18:37:35.327402  140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:37:35.327967  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0128 18:37:35.345516  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0128 18:37:35.363369  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0128 18:37:35.380552  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0128 18:37:35.397674  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
I0128 18:37:35.416171  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
I0128 18:37:35.435443  140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0128 18:37:35.452809  140370 ssh_runner.go:195] Run: openssl version
I0128 18:37:35.457757  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
I0128 18:37:35.465226  140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
I0128 18:37:35.468203  140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
I0128 18:37:35.468250  140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
I0128 18:37:35.472911  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
I0128 18:37:35.479788  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
I0128 18:37:35.487495  140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
I0128 18:37:35.491144  140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
I0128 18:37:35.491199  140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
I0128 18:37:35.496365  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
I0128 18:37:35.503586  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0128 18:37:35.511636  140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.515214  140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.515271  140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.520350  140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
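Note: the 51391683.0, 3ec20f2e.0 and b5213941.0 links above follow OpenSSL's hashed-directory convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how TLS clients look up a CA under /etc/ssl/certs. The manual equivalent for one certificate (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here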
I0128 18:37:35.527590  140370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0128 18:37:35.595260  140370 cni.go:84] Creating CNI manager for ""
I0128 18:37:35.595281  140370 cni.go:136] 3 nodes found, recommending kindnet
I0128 18:37:35.595290  140370 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0128 18:37:35.595309  140370 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0128 18:37:35.595443  140370 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.4
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "multinode-052675-m03"
  kubeletExtraArgs:
    node-ip: 192.168.58.4
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
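Note: this rendered kubeadm config is what the join below must agree with; the authoritative cluster-side copy lives in a ConfigMap and can be inspected with the command kubeadm itself suggests in its preflight output further down:

    kubectl -n kube-system get cm kubeadm-config -o yaml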
I0128 18:37:35.595530  140370 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4

[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0128 18:37:35.595574  140370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0128 18:37:35.604342  140370 binaries.go:44] Found k8s binaries, skipping transfer
I0128 18:37:35.604401  140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0128 18:37:35.610782  140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0128 18:37:35.623062  140370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
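Note: the kubelet flags above are installed as a systemd drop-in (10-kubeadm.conf) alongside the base kubelet.service unit; the effective merged unit can be reviewed on the node with:

    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf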
I0128 18:37:35.635745  140370 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
I0128 18:37:35.638800  140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 18:37:35.647978  140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.648027  140370 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0128 18:37:35.648143  140370 addons.go:65] Setting storage-provisioner=true in profile "multinode-052675"
I0128 18:37:35.648165  140370 addons.go:227] Setting addon storage-provisioner=true in "multinode-052675"
I0128 18:37:35.648166  140370 addons.go:65] Setting default-storageclass=true in profile "multinode-052675"
I0128 18:37:35.648194  140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:35.648219  140370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-052675"
W0128 18:37:35.648174  140370 addons.go:236] addon storage-provisioner should already be in state true
I0128 18:37:35.648229  140370 start.go:299] JoinCluster: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 18:37:35.648342  140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.648354  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0128 18:37:35.648403  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.648554  140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.648785  140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.678753  140370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0128 18:37:35.677915  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.680756  140370 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:37:35.680780  140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0128 18:37:35.680841  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.695309  140370 addons.go:227] Setting addon default-storageclass=true in "multinode-052675"
W0128 18:37:35.695331  140370 addons.go:236] addon default-storageclass should already be in state true
I0128 18:37:35.695353  140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.695742  140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.710623  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.723088  140370 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0128 18:37:35.723114  140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0128 18:37:35.723171  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.749745  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.818947  140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:37:35.833740  140370 start.go:312] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:35.833792  140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.834086  140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-052675-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0128 18:37:35.834128  140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.855911  140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0128 18:37:35.866405  140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:36.216130  140370 node.go:109] successfully drained node "m03"
I0128 18:37:36.218504  140370 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0128 18:37:36.220255  140370 addons.go:492] enable addons completed in 572.236601ms: enabled=[storage-provisioner default-storageclass]
I0128 18:37:36.220401  140370 node.go:125] successfully deleted node "m03"
I0128 18:37:36.220416  140370 start.go:316] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:36.220437  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:36.220487  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:37:36.384147  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:37:36.255380    1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
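Note: the join dies in the kubelet-start phase because the API server still holds a Ready Node object named multinode-052675-m03, even though the drain/delete above reported success for "m03". The error text itself names the manual remedy (sketch, run against the control plane):

    kubectl get node multinode-052675-m03 -o wide   # confirm the stale Node object
    kubectl delete node multinode-052675-m03        # then re-run the kubeadm join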
I0128 18:37:36.384173  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:37:36.384189  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:37:36.422394  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
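Note: the reset fails for a separate reason: with both containerd.sock and cri-dockerd.sock present, kubeadm refuses to guess which runtime to clean up. Passing the socket explicitly, as the join command already does, would disambiguate (sketch):

    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock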
I0128 18:37:36.422421  140370 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:37:36.255380    1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.470499  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:47.470589  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:37:47.620550  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:37:47.507087    1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.620577  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:37:47.620591  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:37:47.656587  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.656613  140370 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:37:47.507087    1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.265264  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:38:09.265318  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:38:09.421323  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:38:09.302407    2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.421352  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:38:09.421365  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:38:09.458262  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.458304  140370 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:38:09.302407    2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.661652  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:38:35.661716  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:38:35.817509  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:38:35.697493    2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.817536  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:38:35.817547  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:38:35.855576  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.855612  140370 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:38:35.697493    2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.504180  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:39:07.504247  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:39:07.655353  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:39:07.539815    2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.655375  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:39:07.655389  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:39:07.694454  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.694486  140370 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:39:07.539815    2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.504816  140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:39:54.504888  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:39:54.657786  140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:39:54.540796    3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.657811  140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:39:54.657827  140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:39:54.694526  140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.694560  140370 start.go:301] JoinCluster complete in 2m19.046332183s
I0128 18:39:54.697658  140370 out.go:177] 
W0128 18:39:54.699334  140370 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:39:54.540796    3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0128 18:39:54.540796    3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

W0128 18:39:54.699351  140370 out.go:239] * 
* 
W0128 18:39:54.701288  140370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0128 18:39:54.703217  140370 out.go:177] 
multinode_test.go:255: node start returned an error. args "out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr": exit status 80
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-052675
helpers_test.go:235: (dbg) docker inspect multinode-052675:

-- stdout --
	[
	    {
	        "Id": "314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6",
	        "Created": "2023-01-28T18:35:44.778914376Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 122257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:35:45.14907195Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/hosts",
	        "LogPath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6-json.log",
	        "Name": "/multinode-052675",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-052675:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-052675",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5-init/diff:/var/lib/docker/overlay2/db391ee9d0a42f7dc5df56df5db62b059d8e193980adf15a88c06e73cfc1e11a/diff:/var/lib/docker/overlay2/e6a847e0ebf9467b2ce5842728c2091e03878a25d813278a725211251a8a0eae/diff:/var/lib/docker/overlay2/32b8245ada3251dc013f140506c5240693363e8c2c9707bb1f2bd97a299c1c9c/diff:/var/lib/docker/overlay2/b82b7f6425d78cea023899c86c4008c827442cea441cb667b37154bbc2d24d2a/diff:/var/lib/docker/overlay2/c46a484250fda920ad973923a47eec6875fb83c5c8ffe4014447a7388adfa158/diff:/var/lib/docker/overlay2/4fd484a57f89f1beb796ce3c7e4df2d30f538b8058da22375106e2a23238713b/diff:/var/lib/docker/overlay2/c69e17070e6c00742f533cdd19089ef2f300b9182f899365e138db9a76b96add/diff:/var/lib/docker/overlay2/a89cd341d5705704d306d02fd86e7ff2f35e0d9ed2e500ac4c92f559d7f9508c/diff:/var/lib/docker/overlay2/460f41c732ad36df327a55d31cece26dad7009e8668de7190d704b3b155d9da4/diff:/var/lib/docker/overlay2/d4f3b8
89378af2d93d8e76850ebeadbcf0c8e9306d6547fb27c0ebb4fed72f10/diff:/var/lib/docker/overlay2/ca8448ea6755a2c2089fa9b41e21d9b4e343d18e866ffdf4e6860c5f5a589253/diff:/var/lib/docker/overlay2/c24c620026d1ca52eb96ff56568a2dd6bc302ff4afa648f8aef8f10ed2ece07b/diff:/var/lib/docker/overlay2/8ac88d56c0f846c2cf3cac5a490d2fb5e20b27161cfd03efcef725215ae3b441/diff:/var/lib/docker/overlay2/0c1b370b7889964a315e82599d29af07536dc272e6858778fd37b33783ba23e8/diff:/var/lib/docker/overlay2/a67314cc1f9da41da9764c7e038fc2cf0f488077a02f985c55f3a98eedd674e0/diff:/var/lib/docker/overlay2/076f5646fa2e7d1a370061474f260221170e0360913a62e429e22b2064e424da/diff:/var/lib/docker/overlay2/47411db3bf4ad8949b8540ea70897d62aa890be3526965fea1dc8c204272c55f/diff:/var/lib/docker/overlay2/8e1e48bf4dc814cd33ebbc6c4a395f3a538f828c7fb0a89e284518636cba1eeb/diff:/var/lib/docker/overlay2/595065ee241a515f497552c7317fadeffa0059d878cbca549866fd353e111815/diff:/var/lib/docker/overlay2/67d36d8ba6c4af51e5fd4c0c2158a8b0a27ce4d12690a8c640df17a49c7d9943/diff:/var/lib/d
ocker/overlay2/d65e9183bc7192d5f984a03a3305bde47985b124f97845ca8aa69614b496f11e/diff:/var/lib/docker/overlay2/f077ef7e752361f549e2bcff923cd9656d9422659019f479d6f31e6aaf138f2d/diff:/var/lib/docker/overlay2/2c86b185414bf11369f21dc9b85f647448d3cb231a389150d886c71a0ca4b421/diff:/var/lib/docker/overlay2/a33763e169f5c1e558d5c22af762002faee9030c7345e94863fedad26dec97d9/diff:/var/lib/docker/overlay2/46f61207484849cc704271281accc52f51d5b60521321d23f35f81f9bb0e4a77/diff:/var/lib/docker/overlay2/95df6666d99483dc3a2800026c52e4748fefdbc9e2546bfd46466751d0d731a9/diff:/var/lib/docker/overlay2/a456a63f8e47b35152666b5bed78a907758cd348f3f920ffbb0d9311c9d279f9/diff:/var/lib/docker/overlay2/1c5e94ffa671b54b267cd527227dcfc39ed5bbab8e0fb6be2070ec431d507a0a/diff:/var/lib/docker/overlay2/8a3bd5d98c7659cf304678b6264744ec246cef9aee117fa1a540ff86a482ccc9/diff:/var/lib/docker/overlay2/9cad4076d4b4bbcef9e82592a57b400fe80d42ff1a19694877817439314cee0a/diff:/var/lib/docker/overlay2/7b472338287e29db62b353700eac813b73c885f86407cd11c41a1934299
e0863/diff:/var/lib/docker/overlay2/7354f50bc82cc9855195da76830d2458639d9e6287091849761c899619a2ac04/diff:/var/lib/docker/overlay2/8ab525fe3dfca3bc1d9268c9a3f943b207b867d96340df596abb469af4828ba6/diff:/var/lib/docker/overlay2/dffeea500d781c9d4c5cc65f1e1b6700cdb3a811012a3badaa2115639ffc0caf/diff:/var/lib/docker/overlay2/61a63133b63995518dd6210c5e78426401d4fc9f7d185b0aa89bbda3fc8c25b4/diff:/var/lib/docker/overlay2/e9e4eb2fce220904fdd41e59a5fa8987119588334028be497f829eef4be81f1c/diff:/var/lib/docker/overlay2/07a1057c0f65b9e87f72fa58023fbf90660450984d4fbc6f060ec063e9b08d45/diff:/var/lib/docker/overlay2/f2287ff314d014b75d8b3eb8116909dbed8fc8245f5279470da1b4ae6794135c/diff:/var/lib/docker/overlay2/b32153240a2094363680e20f20963644e66c17ce8ba073e6c2792e4b8a0b94e6/diff:/var/lib/docker/overlay2/bfaa3114ab06fc41c74eee314d6113b0126c1a54deea72eaeb994032c71a489a/diff:/var/lib/docker/overlay2/214d0a46ee53e937a5e0573eb214868d10db3a2af1260784531edbd04adcd3b9/diff:/var/lib/docker/overlay2/508066538d9756b99d4582d0658234a93882f9
326f08176861a8667ec795f2c2/diff:/var/lib/docker/overlay2/58e67638a3291768e9dbb2be901c6b5639959c7cc86f4e4bab8f2e639b50661c/diff:/var/lib/docker/overlay2/a4f5240c2f2f160632514b931abac3aed3b9488f5bc07990127c7e5c3e2fd9ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-052675",
	                "Source": "/var/lib/docker/volumes/multinode-052675/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-052675",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-052675",
	                "name.minikube.sigs.k8s.io": "multinode-052675",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5004b0f45c3461ea2e19628e37ba0c6ee2efb6fe7adbec99269076992ef3f002",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5004b0f45c34",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-052675": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "314f7839c3ce",
	                        "multinode-052675"
	                    ],
	                    "NetworkID": "2c5d882139a100a36fc7907b7c297037e49f8b96a91c4fd0ce3c1e2733608fac",
	                    "EndpointID": "01c0ded6d1575738961e5d1f4a19718c2ebbe1822adaac8b1a34e7271af18bba",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
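
The full docker inspect dump above is what the post-mortem helper captures verbatim; when triaging by hand, the same data can be narrowed with a Go template. A sketch using the standard docker inspect --format flag and the profile name from this run:

	# Print only the container state and its IP on the minikube network.
	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' multinode-052675

For this run that should print "running 192.168.58.2", matching the State and Networks sections of the dump.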
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-052675 -n multinode-052675
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-052675 logs -n 25: (1.11647299s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-052675 cp multinode-052675:/home/docker/cp-test.txt                           | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m03:/home/docker/cp-test_multinode-052675_multinode-052675-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n multinode-052675-m03 sudo cat                                   | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | /home/docker/cp-test_multinode-052675_multinode-052675-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp testdata/cp-test.txt                                                | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1635582165/001/cp-test_multinode-052675-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675:/home/docker/cp-test_multinode-052675-m02_multinode-052675.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n multinode-052675 sudo cat                                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | /home/docker/cp-test_multinode-052675-m02_multinode-052675.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m03:/home/docker/cp-test_multinode-052675-m02_multinode-052675-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n multinode-052675-m03 sudo cat                                   | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | /home/docker/cp-test_multinode-052675-m02_multinode-052675-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp testdata/cp-test.txt                                                | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1635582165/001/cp-test_multinode-052675-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675:/home/docker/cp-test_multinode-052675-m03_multinode-052675.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n multinode-052675 sudo cat                                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | /home/docker/cp-test_multinode-052675-m03_multinode-052675.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt                       | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m02:/home/docker/cp-test_multinode-052675-m03_multinode-052675-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n                                                                 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | multinode-052675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-052675 ssh -n multinode-052675-m02 sudo cat                                   | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	|         | /home/docker/cp-test_multinode-052675-m03_multinode-052675-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-052675 node stop m03                                                          | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
	| node    | multinode-052675 node start                                                             | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC |                     |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 18:35:38
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 18:35:38.654513  121576 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:35:38.654636  121576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:35:38.654644  121576 out.go:309] Setting ErrFile to fd 2...
	I0128 18:35:38.654649  121576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:35:38.654765  121576 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	I0128 18:35:38.656127  121576 out.go:303] Setting JSON to false
	I0128 18:35:38.657615  121576 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1091,"bootTime":1674929848,"procs":570,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 18:35:38.657686  121576 start.go:135] virtualization: kvm guest
	I0128 18:35:38.660205  121576 out.go:177] * [multinode-052675] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 18:35:38.661669  121576 notify.go:220] Checking for updates...
	I0128 18:35:38.663047  121576 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 18:35:38.664647  121576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 18:35:38.666308  121576 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:35:38.667924  121576 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	I0128 18:35:38.670441  121576 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 18:35:38.671951  121576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 18:35:38.673782  121576 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 18:35:38.701187  121576 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0128 18:35:38.701290  121576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:35:38.798534  121576 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-28 18:35:38.721493359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:35:38.798632  121576 docker.go:282] overlay module found
	I0128 18:35:38.801215  121576 out.go:177] * Using the docker driver based on user configuration
	I0128 18:35:38.802908  121576 start.go:296] selected driver: docker
	I0128 18:35:38.802927  121576 start.go:857] validating driver "docker" against <nil>
	I0128 18:35:38.802939  121576 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 18:35:38.803684  121576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:35:38.901084  121576 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-28 18:35:38.824010094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:35:38.901218  121576 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 18:35:38.901395  121576 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 18:35:38.903806  121576 out.go:177] * Using Docker driver with root privileges
	I0128 18:35:38.905605  121576 cni.go:84] Creating CNI manager for ""
	I0128 18:35:38.905635  121576 cni.go:136] 0 nodes found, recommending kindnet
	I0128 18:35:38.905645  121576 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0128 18:35:38.905656  121576 start_flags.go:319] config:
	{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkP
lugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 18:35:38.907589  121576 out.go:177] * Starting control plane node multinode-052675 in cluster multinode-052675
	I0128 18:35:38.909578  121576 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 18:35:38.911485  121576 out.go:177] * Pulling base image ...
	I0128 18:35:38.913505  121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 18:35:38.913569  121576 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 18:35:38.913579  121576 cache.go:57] Caching tarball of preloaded images
	I0128 18:35:38.913628  121576 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 18:35:38.913656  121576 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 18:35:38.913665  121576 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 18:35:38.913987  121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
	I0128 18:35:38.914007  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json: {Name:mk32894770f2a18eadadbbeaddece988df6d749a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:35:38.936811  121576 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 18:35:38.936834  121576 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 18:35:38.936851  121576 cache.go:193] Successfully downloaded all kic artifacts
	I0128 18:35:38.936894  121576 start.go:364] acquiring machines lock for multinode-052675: {Name:mk85ebbdb31f233e850f6772b4e0f5a60ad37b83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 18:35:38.937019  121576 start.go:368] acquired machines lock for "multinode-052675" in 89.778µs
	I0128 18:35:38.937047  121576 start.go:93] Provisioning new machine with config: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 18:35:38.937141  121576 start.go:125] createHost starting for "" (driver="docker")
	I0128 18:35:38.940342  121576 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0128 18:35:38.940558  121576 start.go:159] libmachine.API.Create for "multinode-052675" (driver="docker")
	I0128 18:35:38.940581  121576 client.go:168] LocalClient.Create starting
	I0128 18:35:38.940662  121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem
	I0128 18:35:38.940692  121576 main.go:141] libmachine: Decoding PEM data...
	I0128 18:35:38.940709  121576 main.go:141] libmachine: Parsing certificate...
	I0128 18:35:38.940759  121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem
	I0128 18:35:38.940772  121576 main.go:141] libmachine: Decoding PEM data...
	I0128 18:35:38.940784  121576 main.go:141] libmachine: Parsing certificate...
	I0128 18:35:38.941094  121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 18:35:38.962469  121576 cli_runner.go:211] docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 18:35:38.962531  121576 network_create.go:281] running [docker network inspect multinode-052675] to gather additional debugging logs...
	I0128 18:35:38.962550  121576 cli_runner.go:164] Run: docker network inspect multinode-052675
	W0128 18:35:38.985242  121576 cli_runner.go:211] docker network inspect multinode-052675 returned with exit code 1
	I0128 18:35:38.985288  121576 network_create.go:284] error running [docker network inspect multinode-052675]: docker network inspect multinode-052675: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-052675
	I0128 18:35:38.985306  121576 network_create.go:286] output of [docker network inspect multinode-052675]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-052675
	
	** /stderr **
	I0128 18:35:38.985366  121576 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 18:35:39.007351  121576 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5bbc83fbc3cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:2b:8e:8b} reservation:<nil>}
	I0128 18:35:39.007838  121576 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e08de0}
	I0128 18:35:39.007867  121576 network_create.go:123] attempt to create docker network multinode-052675 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0128 18:35:39.007937  121576 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-052675 multinode-052675
	I0128 18:35:39.065261  121576 network_create.go:107] docker network multinode-052675 192.168.58.0/24 created
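
The two network.go lines above show the probe order: 192.168.49.0/24 is skipped because the first cluster's bridge already owns it, and the next candidate, 192.168.58.0/24, is selected. A minimal Go sketch of that probe, assuming only Docker-owned subnets need to be avoided (the real check also inspects host interfaces), with the +9 step inferred from 49 → 58 in this log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dockerSubnets collects the IPv4 subnets already claimed by Docker networks.
    func dockerSubnets() (map[string]bool, error) {
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            return nil, err
        }
        used := map[string]bool{}
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("docker", "network", "inspect",
                "--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
            if err != nil {
                continue
            }
            for _, s := range strings.Fields(string(out)) {
                used[s] = true
            }
        }
        return used, nil
    }

    func main() {
        used, err := dockerSubnets()
        if err != nil {
            panic(err)
        }
        // Walk candidates the way this log suggests: 192.168.49.0/24 first,
        // then +9 per step (49 taken by the first cluster, 58 chosen here).
        for third := 49; third < 255; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if !used[subnet] {
                fmt.Println("free subnet:", subnet)
                return
            }
        }
    }
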
	I0128 18:35:39.065290  121576 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-052675" container
	I0128 18:35:39.065364  121576 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 18:35:39.086245  121576 cli_runner.go:164] Run: docker volume create multinode-052675 --label name.minikube.sigs.k8s.io=multinode-052675 --label created_by.minikube.sigs.k8s.io=true
	I0128 18:35:39.108252  121576 oci.go:103] Successfully created a docker volume multinode-052675
	I0128 18:35:39.108322  121576 cli_runner.go:164] Run: docker run --rm --name multinode-052675-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675 --entrypoint /usr/bin/test -v multinode-052675:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 18:35:39.664859  121576 oci.go:107] Successfully prepared a docker volume multinode-052675
	I0128 18:35:39.664901  121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 18:35:39.664923  121576 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 18:35:39.664995  121576 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 18:35:44.658237  121576 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (4.993156975s)
	I0128 18:35:44.658267  121576 kic.go:199] duration metric: took 4.993343 seconds to extract preloaded images to volume
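
The preload step above never untars anything on the host: the .tar.lz4 is bind-mounted read-only into a throwaway container whose entrypoint is tar, so lz4 only has to exist inside the kicbase image. The same trick, wrapped in Go (argument values hypothetical; the image must ship /usr/bin/tar and lz4):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractIntoVolume unpacks a host-side .tar.lz4 into a named Docker
    // volume using a disposable container, mirroring the command above.
    func extractIntoVolume(tarball, volume, image string) error {
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Hypothetical arguments; the log uses the preload tarball, the
        // machine volume, and the pinned kicbase image.
        if err := extractIntoVolume("preloaded.tar.lz4", "multinode-052675", "ubuntu:20.04"); err != nil {
            panic(err)
        }
    }
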
	W0128 18:35:44.658434  121576 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0128 18:35:44.658558  121576 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 18:35:44.757525  121576 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-052675 --name multinode-052675 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-052675 --network multinode-052675 --ip 192.168.58.2 --volume multinode-052675:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 18:35:45.157931  121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Running}}
	I0128 18:35:45.183752  121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:35:45.208132  121576 cli_runner.go:164] Run: docker exec multinode-052675 stat /var/lib/dpkg/alternatives/iptables
	I0128 18:35:45.254420  121576 oci.go:144] the created container "multinode-052675" has a running status.
	I0128 18:35:45.254456  121576 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa...
	I0128 18:35:45.342834  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0128 18:35:45.342900  121576 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 18:35:45.406962  121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:35:45.430658  121576 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 18:35:45.430684  121576 kic_runner.go:114] Args: [docker exec --privileged multinode-052675 chown docker:docker /home/docker/.ssh/authorized_keys]
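
kic.go:221 generates a fresh RSA keypair per machine and pushes only the public half into the container's /home/docker/.ssh/authorized_keys, then fixes ownership. A minimal sketch of the key-material side in Go (requires golang.org/x/crypto/ssh; file writes and docker exec elided):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private half: what ends up at .minikube/machines/<name>/id_rsa.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // Public half: the single authorized_keys line copied into the
        // container (381 bytes in this run) and chown'd to docker:docker.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("id_rsa: %d bytes PEM\n", len(privPEM))
        fmt.Printf("authorized_keys: %s", ssh.MarshalAuthorizedKey(pub))
    }
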
	I0128 18:35:45.498575  121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:35:45.521124  121576 machine.go:88] provisioning docker machine ...
	I0128 18:35:45.521169  121576 ubuntu.go:169] provisioning hostname "multinode-052675"
	I0128 18:35:45.521226  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:45.548352  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:35:45.548618  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0128 18:35:45.548650  121576 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-052675 && echo "multinode-052675" | sudo tee /etc/hostname
	I0128 18:35:45.549274  121576 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32780->127.0.0.1:32852: read: connection reset by peer
	I0128 18:35:48.689431  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675
	
	I0128 18:35:48.689510  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:48.713097  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:35:48.713268  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0128 18:35:48.713286  121576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-052675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-052675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 18:35:48.844487  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
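
Because 22/tcp was published to an ephemeral host port (32852 in this run), every SSH step is preceded by the same docker container inspect template to rediscover the mapping. A small Go equivalent of that lookup:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortForSSH asks Docker which ephemeral host port is bound to the
    // container's 22/tcp, using the same Go template as the log lines above.
    func hostPortForSSH(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPortForSSH("multinode-052675")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh -p", port, "docker@127.0.0.1") // 32852 in this run
    }
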
	I0128 18:35:48.844514  121576 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
	I0128 18:35:48.844536  121576 ubuntu.go:177] setting up certificates
	I0128 18:35:48.844545  121576 provision.go:83] configureAuth start
	I0128 18:35:48.844597  121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675
	I0128 18:35:48.866662  121576 provision.go:138] copyHostCerts
	I0128 18:35:48.866696  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
	I0128 18:35:48.866728  121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
	I0128 18:35:48.866739  121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
	I0128 18:35:48.866810  121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
	I0128 18:35:48.866896  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
	I0128 18:35:48.866919  121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
	I0128 18:35:48.866927  121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
	I0128 18:35:48.866958  121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
	I0128 18:35:48.867014  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
	I0128 18:35:48.867034  121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
	I0128 18:35:48.867040  121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
	I0128 18:35:48.867071  121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
	I0128 18:35:48.867132  121576 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675]
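
provision.go:112 mints a Docker server certificate whose SANs cover the container IP, loopback, and the machine hostnames, signed by the shared minikube CA. A self-contained Go sketch of issuing such a cert (a throwaway CA stands in for ca.pem/ca-key.pem here):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Stand-in CA; the real run loads ca.pem/ca-key.pem from .minikube/certs.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-052675"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-052675"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued server cert, %d bytes DER\n", len(der))
    }
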
	I0128 18:35:49.179482  121576 provision.go:172] copyRemoteCerts
	I0128 18:35:49.179549  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 18:35:49.179581  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:49.203934  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:35:49.295905  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0128 18:35:49.295979  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 18:35:49.313674  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0128 18:35:49.313728  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 18:35:49.330710  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0128 18:35:49.330760  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0128 18:35:49.347624  121576 provision.go:86] duration metric: configureAuth took 503.066444ms
	I0128 18:35:49.347651  121576 ubuntu.go:193] setting minikube options for container-runtime
	I0128 18:35:49.347805  121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:35:49.347850  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:49.370091  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:35:49.370273  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0128 18:35:49.370287  121576 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 18:35:49.500764  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 18:35:49.500789  121576 ubuntu.go:71] root file system type: overlay
	I0128 18:35:49.500982  121576 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 18:35:49.501044  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:49.525240  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:35:49.525391  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0128 18:35:49.525468  121576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 18:35:49.664945  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 18:35:49.665020  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:49.689142  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:35:49.689302  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0128 18:35:49.689385  121576 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 18:35:50.333243  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:35:49.659262819 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0128 18:35:50.333271  121576 machine.go:91] provisioned docker machine in 4.8121178s
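
The `diff -u ... || { mv ...; daemon-reload; restart; }` command above is an idempotence guard: Docker is only restarted when the freshly rendered unit actually differs from the installed one, which is why the full diff shows up on this first boot. The same guard, sketched locally in Go (paths hypothetical; the real run does this over SSH with sudo):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // replaceIfChanged installs newPath over path only when contents differ,
    // mirroring the shell guard so the daemon is not bounced on no-op runs.
    func replaceIfChanged(path, newPath string) (bool, error) {
        old, _ := os.ReadFile(path) // a missing file reads as empty: always differs
        cur, err := os.ReadFile(newPath)
        if err != nil {
            return false, err
        }
        if bytes.Equal(old, cur) {
            return false, nil
        }
        if err := os.Rename(newPath, path); err != nil {
            return false, err
        }
        // Hypothetical local restart; the log runs daemon-reload, enable,
        // and restart over SSH with sudo after the rename.
        return true, exec.Command("systemctl", "daemon-reload").Run()
    }

    func main() {
        changed, err := replaceIfChanged("/tmp/docker.service", "/tmp/docker.service.new")
        fmt.Println("changed:", changed, "err:", err)
    }
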
	I0128 18:35:50.333289  121576 client.go:171] LocalClient.Create took 11.392703028s
	I0128 18:35:50.333301  121576 start.go:167] duration metric: libmachine.API.Create for "multinode-052675" took 11.392742716s
	I0128 18:35:50.333309  121576 start.go:300] post-start starting for "multinode-052675" (driver="docker")
	I0128 18:35:50.333316  121576 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 18:35:50.333377  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 18:35:50.333416  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:50.357009  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:35:50.452278  121576 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 18:35:50.454884  121576 command_runner.go:130] > NAME="Ubuntu"
	I0128 18:35:50.454899  121576 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0128 18:35:50.454903  121576 command_runner.go:130] > ID=ubuntu
	I0128 18:35:50.454908  121576 command_runner.go:130] > ID_LIKE=debian
	I0128 18:35:50.454913  121576 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0128 18:35:50.454917  121576 command_runner.go:130] > VERSION_ID="20.04"
	I0128 18:35:50.454922  121576 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0128 18:35:50.454929  121576 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0128 18:35:50.454937  121576 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0128 18:35:50.454952  121576 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0128 18:35:50.454962  121576 command_runner.go:130] > VERSION_CODENAME=focal
	I0128 18:35:50.454973  121576 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0128 18:35:50.455028  121576 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 18:35:50.455052  121576 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 18:35:50.455066  121576 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 18:35:50.455076  121576 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 18:35:50.455087  121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
	I0128 18:35:50.455134  121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
	I0128 18:35:50.455209  121576 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
	I0128 18:35:50.455219  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /etc/ssl/certs/103532.pem
	I0128 18:35:50.455302  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 18:35:50.462121  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
	I0128 18:35:50.479679  121576 start.go:303] post-start completed in 146.357687ms
	I0128 18:35:50.480033  121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675
	I0128 18:35:50.502489  121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
	I0128 18:35:50.502706  121576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 18:35:50.502742  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:50.524482  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:35:50.612747  121576 command_runner.go:130] > 16%
	I0128 18:35:50.612820  121576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 18:35:50.616387  121576 command_runner.go:130] > 247G
	I0128 18:35:50.616425  121576 start.go:128] duration metric: createHost completed in 11.679275622s
	I0128 18:35:50.616435  121576 start.go:83] releasing machines lock for "multinode-052675", held for 11.679402154s
	I0128 18:35:50.616507  121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675
	I0128 18:35:50.639067  121576 ssh_runner.go:195] Run: cat /version.json
	I0128 18:35:50.639112  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:50.639125  121576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 18:35:50.639186  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:35:50.661638  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:35:50.662053  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:35:50.784721  121576 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0128 18:35:50.784814  121576 command_runner.go:130] > {"iso_version": "v1.29.0", "kicbase_version": "v0.0.37", "minikube_version": "v1.29.0", "commit": "69417d0c8c1a2f3e72a4e5999252066a50eceb1b"}
	I0128 18:35:50.784952  121576 ssh_runner.go:195] Run: systemctl --version
	I0128 18:35:50.788704  121576 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0128 18:35:50.788731  121576 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0128 18:35:50.788923  121576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 18:35:50.792422  121576 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0128 18:35:50.792459  121576 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0128 18:35:50.792470  121576 command_runner.go:130] > Device: 34h/52d	Inode: 568458      Links: 1
	I0128 18:35:50.792480  121576 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0128 18:35:50.792488  121576 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0128 18:35:50.792496  121576 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0128 18:35:50.792501  121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.814355792 +0000
	I0128 18:35:50.792507  121576 command_runner.go:130] >  Birth: -
	I0128 18:35:50.792693  121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 18:35:50.811627  121576 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 18:35:50.811743  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 18:35:50.818047  121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 18:35:50.830621  121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 18:35:50.848722  121576 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0128 18:35:50.848775  121576 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0128 18:35:50.848789  121576 start.go:483] detecting cgroup driver to use...
	I0128 18:35:50.848829  121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 18:35:50.848967  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 18:35:50.861347  121576 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0128 18:35:50.861375  121576 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0128 18:35:50.862091  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 18:35:50.870112  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 18:35:50.878302  121576 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 18:35:50.878350  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 18:35:50.886488  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 18:35:50.894963  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 18:35:50.903703  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 18:35:50.912977  121576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 18:35:50.921938  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 18:35:50.930610  121576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 18:35:50.937803  121576 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0128 18:35:50.937877  121576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 18:35:50.945261  121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:35:51.014930  121576 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 18:35:51.095722  121576 start.go:483] detecting cgroup driver to use...
	I0128 18:35:51.095778  121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 18:35:51.095830  121576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 18:35:51.104894  121576 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0128 18:35:51.104939  121576 command_runner.go:130] > [Unit]
	I0128 18:35:51.104949  121576 command_runner.go:130] > Description=Docker Application Container Engine
	I0128 18:35:51.104957  121576 command_runner.go:130] > Documentation=https://docs.docker.com
	I0128 18:35:51.104969  121576 command_runner.go:130] > BindsTo=containerd.service
	I0128 18:35:51.104979  121576 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0128 18:35:51.104990  121576 command_runner.go:130] > Wants=network-online.target
	I0128 18:35:51.105000  121576 command_runner.go:130] > Requires=docker.socket
	I0128 18:35:51.105020  121576 command_runner.go:130] > StartLimitBurst=3
	I0128 18:35:51.105031  121576 command_runner.go:130] > StartLimitIntervalSec=60
	I0128 18:35:51.105040  121576 command_runner.go:130] > [Service]
	I0128 18:35:51.105047  121576 command_runner.go:130] > Type=notify
	I0128 18:35:51.105056  121576 command_runner.go:130] > Restart=on-failure
	I0128 18:35:51.105074  121576 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0128 18:35:51.105090  121576 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0128 18:35:51.105105  121576 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0128 18:35:51.105119  121576 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0128 18:35:51.105133  121576 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0128 18:35:51.105147  121576 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0128 18:35:51.105161  121576 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0128 18:35:51.105193  121576 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0128 18:35:51.105208  121576 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0128 18:35:51.105218  121576 command_runner.go:130] > ExecStart=
	I0128 18:35:51.105245  121576 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0128 18:35:51.105256  121576 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0128 18:35:51.105267  121576 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0128 18:35:51.105281  121576 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0128 18:35:51.105291  121576 command_runner.go:130] > LimitNOFILE=infinity
	I0128 18:35:51.105299  121576 command_runner.go:130] > LimitNPROC=infinity
	I0128 18:35:51.105307  121576 command_runner.go:130] > LimitCORE=infinity
	I0128 18:35:51.105316  121576 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0128 18:35:51.105328  121576 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0128 18:35:51.105338  121576 command_runner.go:130] > TasksMax=infinity
	I0128 18:35:51.105348  121576 command_runner.go:130] > TimeoutStartSec=0
	I0128 18:35:51.105359  121576 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0128 18:35:51.105369  121576 command_runner.go:130] > Delegate=yes
	I0128 18:35:51.105384  121576 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0128 18:35:51.105394  121576 command_runner.go:130] > KillMode=process
	I0128 18:35:51.105408  121576 command_runner.go:130] > [Install]
	I0128 18:35:51.105419  121576 command_runner.go:130] > WantedBy=multi-user.target
	I0128 18:35:51.105748  121576 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 18:35:51.105815  121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 18:35:51.115947  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 18:35:51.129293  121576 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0128 18:35:51.129331  121576 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
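
Note that /etc/crictl.yaml has now been written twice: first pointing at the containerd socket, then rewritten for cri-dockerd once the runner settles on Docker as the runtime. The file itself is trivial, as a Go sketch shows (local path hypothetical; the real write happens over SSH):

    package main

    import (
        "fmt"
        "os"
    )

    // writeCrictlConfig points crictl at a CRI socket; the log writes this
    // file first for containerd, then again for cri-dockerd.
    func writeCrictlConfig(path, endpoint string) error {
        conf := fmt.Sprintf("runtime-endpoint: %s\nimage-endpoint: %s\n", endpoint, endpoint)
        return os.WriteFile(path, []byte(conf), 0o644)
    }

    func main() {
        if err := writeCrictlConfig("/tmp/crictl.yaml", "unix:///var/run/cri-dockerd.sock"); err != nil {
            panic(err)
        }
    }
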
	I0128 18:35:51.129386  121576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 18:35:51.215555  121576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 18:35:51.313329  121576 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 18:35:51.313359  121576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 18:35:51.326876  121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:35:51.412468  121576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 18:35:51.623365  121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 18:35:51.703684  121576 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0128 18:35:51.703760  121576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 18:35:51.784951  121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 18:35:51.860364  121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:35:51.935240  121576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 18:35:51.947654  121576 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 18:35:51.947725  121576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 18:35:51.950925  121576 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0128 18:35:51.950954  121576 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0128 18:35:51.950963  121576 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0128 18:35:51.950973  121576 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0128 18:35:51.950981  121576 command_runner.go:130] > Access: 2023-01-28 18:35:51.939485456 +0000
	I0128 18:35:51.950990  121576 command_runner.go:130] > Modify: 2023-01-28 18:35:51.939485456 +0000
	I0128 18:35:51.951001  121576 command_runner.go:130] > Change: 2023-01-28 18:35:51.943485846 +0000
	I0128 18:35:51.951011  121576 command_runner.go:130] >  Birth: -
	I0128 18:35:51.951032  121576 start.go:551] Will wait 60s for crictl version
	I0128 18:35:51.951072  121576 ssh_runner.go:195] Run: which crictl
	I0128 18:35:51.953864  121576 command_runner.go:130] > /usr/bin/crictl
	I0128 18:35:51.953985  121576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 18:35:52.048118  121576 command_runner.go:130] > Version:  0.1.0
	I0128 18:35:52.048152  121576 command_runner.go:130] > RuntimeName:  docker
	I0128 18:35:52.048161  121576 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0128 18:35:52.048170  121576 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0128 18:35:52.049799  121576 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 18:35:52.049855  121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 18:35:52.076368  121576 command_runner.go:130] > 20.10.23
	I0128 18:35:52.077580  121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 18:35:52.103757  121576 command_runner.go:130] > 20.10.23
	I0128 18:35:52.106824  121576 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 18:35:52.106909  121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 18:35:52.128260  121576 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0128 18:35:52.131415  121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 18:35:52.141376  121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 18:35:52.141440  121576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 18:35:52.163955  121576 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0128 18:35:52.163976  121576 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0128 18:35:52.163981  121576 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0128 18:35:52.163986  121576 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0128 18:35:52.163991  121576 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0128 18:35:52.163995  121576 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0128 18:35:52.163999  121576 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0128 18:35:52.164005  121576 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 18:35:52.164039  121576 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 18:35:52.164051  121576 docker.go:560] Images already preloaded, skipping extraction
	I0128 18:35:52.164099  121576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 18:35:52.184774  121576 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0128 18:35:52.184798  121576 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0128 18:35:52.184803  121576 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0128 18:35:52.184808  121576 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0128 18:35:52.184813  121576 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0128 18:35:52.184817  121576 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0128 18:35:52.184822  121576 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0128 18:35:52.184827  121576 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 18:35:52.185951  121576 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 18:35:52.185971  121576 cache_images.go:84] Images are preloaded, skipping loading
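
"Images are preloaded, skipping loading" is decided by comparing the `docker images --format {{.Repository}}:{{.Tag}}` output shown twice above against the expected image list for v1.26.1. A minimal Go version of that check (expected list truncated for brevity):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // preloaded reports whether every expected image is already present in
    // the Docker image store, which is what lets the runner skip loading.
    func preloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := preloaded([]string{
            "registry.k8s.io/kube-apiserver:v1.26.1",
            "registry.k8s.io/etcd:3.5.6-0",
            "registry.k8s.io/pause:3.9",
        })
        fmt.Println(ok, err)
    }
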
	I0128 18:35:52.186027  121576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 18:35:52.252127  121576 command_runner.go:130] > cgroupfs
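
The `cgroupfs` answer from `docker info --format {{.CgroupDriver}}` is threaded into both the containerd config edits earlier (SystemdCgroup = false) and the kubelet config below (cgroupDriver: cgroupfs); all three components must agree on one driver or pods fail to start. Querying it from Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same query as the log line above; the answer must match kubelet's
        // cgroupDriver and containerd's SystemdCgroup setting.
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" here
    }
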
	I0128 18:35:52.253494  121576 cni.go:84] Creating CNI manager for ""
	I0128 18:35:52.253514  121576 cni.go:136] 1 nodes found, recommending kindnet
	I0128 18:35:52.253524  121576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 18:35:52.253547  121576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 18:35:52.253696  121576 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-052675"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 18:35:52.253777  121576 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
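
Both the kubeadm YAML and the kubelet unit above are rendered by filling Go templates from the options struct logged at kubeadm.go:172. A trimmed-down sketch of that rendering step (template cut to a few fields; struct shape hypothetical):

    package main

    import (
        "os"
        "text/template"
    )

    // A few fields from the options struct, rendered the way the full
    // kubeadm config above is produced.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        opts := struct {
            AdvertiseAddress, CRISocket, NodeName, NodeIP string
            APIServerPort                                 int
        }{"192.168.58.2", "/var/run/cri-dockerd.sock", "multinode-052675", "192.168.58.2", 8443}
        _ = template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, opts)
    }
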
	I0128 18:35:52.253825  121576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 18:35:52.261118  121576 command_runner.go:130] > kubeadm
	I0128 18:35:52.261137  121576 command_runner.go:130] > kubectl
	I0128 18:35:52.261143  121576 command_runner.go:130] > kubelet
	I0128 18:35:52.261163  121576 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 18:35:52.261202  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 18:35:52.268092  121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0128 18:35:52.281332  121576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 18:35:52.295184  121576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0128 18:35:52.310668  121576 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0128 18:35:52.314358  121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 18:35:52.324913  121576 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.2
	I0128 18:35:52.324941  121576 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:35:52.325084  121576 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
	I0128 18:35:52.325256  121576 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
	I0128 18:35:52.325324  121576 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key
	I0128 18:35:52.325339  121576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt with IP's: []
	I0128 18:35:52.389976  121576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt ...
	I0128 18:35:52.390011  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt: {Name:mk6256c6f690324ccb025cd062c097c1548edb6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:35:52.390192  121576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key ...
	I0128 18:35:52.390204  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key: {Name:mkc615be81182c8600f095a5a9816bfa6149b5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:35:52.390277  121576 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041
	I0128 18:35:52.390291  121576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 18:35:52.635045  121576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041 ...
	I0128 18:35:52.635089  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041: {Name:mk4fb8d5a64eb7055553ed41478812e02920018d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:35:52.635275  121576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041 ...
	I0128 18:35:52.635288  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041: {Name:mkb1fe31e630d1e60dd66b937af776be924593f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:35:52.635357  121576 certs.go:333] copying /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt
	I0128 18:35:52.635431  121576 certs.go:337] copying /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key
	I0128 18:35:52.635478  121576 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key
	I0128 18:35:52.635491  121576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt with IP's: []
	I0128 18:35:52.728534  121576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt ...
	I0128 18:35:52.728567  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt: {Name:mkc80942dc65c06a9b7de9d77a7e11e3b4f4a219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:35:52.728726  121576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key ...
	I0128 18:35:52.728737  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key: {Name:mk74652b80055550da239fc7fbdd53f6c1af5c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
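[Annotation] Note the SAN list on the apiserver cert above: the node IP (192.168.58.2), the first service-cluster IP (10.96.0.1, derived from the 10.96.0.0/12 ServiceCIDR in the config), loopback, and 10.0.0.1. A minimal, illustrative crypto/x509 sketch of how IP SANs end up in a certificate; this is self-signed for brevity and is not minikube's signing code:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // IP SANs matching the log: node IP, first service IP,
            // loopback, and the 10.0.0.1 fallback.
            IPAddresses: []net.IP{
                net.ParseIP("192.168.58.2"),
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }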
	I0128 18:35:52.728799  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0128 18:35:52.728814  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0128 18:35:52.728822  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0128 18:35:52.728835  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0128 18:35:52.728847  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0128 18:35:52.728855  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0128 18:35:52.728865  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0128 18:35:52.728874  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0128 18:35:52.728920  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
	W0128 18:35:52.728952  121576 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
	I0128 18:35:52.728963  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 18:35:52.729028  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
	I0128 18:35:52.729054  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
	I0128 18:35:52.729075  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
	I0128 18:35:52.729110  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
	I0128 18:35:52.729144  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:35:52.729157  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem -> /usr/share/ca-certificates/10353.pem
	I0128 18:35:52.729169  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /usr/share/ca-certificates/103532.pem
	I0128 18:35:52.729692  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 18:35:52.747915  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 18:35:52.764814  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 18:35:52.781666  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 18:35:52.798841  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 18:35:52.817117  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 18:35:52.834165  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 18:35:52.851823  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 18:35:52.870078  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 18:35:52.887564  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
	I0128 18:35:52.905470  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
	I0128 18:35:52.923781  121576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 18:35:52.936202  121576 ssh_runner.go:195] Run: openssl version
	I0128 18:35:52.940671  121576 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0128 18:35:52.940812  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 18:35:52.947706  121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:35:52.950746  121576 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:35:52.950783  121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:35:52.950825  121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:35:52.955595  121576 command_runner.go:130] > b5213941
	I0128 18:35:52.955773  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 18:35:52.963292  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
	I0128 18:35:52.971323  121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
	I0128 18:35:52.974585  121576 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
	I0128 18:35:52.974634  121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
	I0128 18:35:52.974692  121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
	I0128 18:35:52.979450  121576 command_runner.go:130] > 51391683
	I0128 18:35:52.979664  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
	I0128 18:35:52.987069  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
	I0128 18:35:52.994920  121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
	I0128 18:35:52.998354  121576 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
	I0128 18:35:52.998393  121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
	I0128 18:35:52.998443  121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
	I0128 18:35:53.003371  121576 command_runner.go:130] > 3ec20f2e
	I0128 18:35:53.003436  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
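[Annotation] The `openssl x509 -hash` / `ln -fs` pairs above implement the standard c_rehash layout: OpenSSL locates CA certificates in /etc/ssl/certs via a symlink named `<subject-hash>.0` pointing at the PEM file (e.g. b5213941.0 for minikubeCA.pem in this run). A small sketch reproducing one link; it shells out to the same openssl binary the log shows is present and needs root for the symlink:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }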
	I0128 18:35:53.011362  121576 kubeadm.go:401] StartCluster: {Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 18:35:53.011488  121576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 18:35:53.032670  121576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 18:35:53.039074  121576 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0128 18:35:53.039103  121576 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0128 18:35:53.039111  121576 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0128 18:35:53.039655  121576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 18:35:53.046335  121576 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 18:35:53.046389  121576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 18:35:53.053971  121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0128 18:35:53.054001  121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0128 18:35:53.054012  121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0128 18:35:53.054025  121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 18:35:53.054068  121576 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 18:35:53.054117  121576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 18:35:53.102905  121576 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0128 18:35:53.102926  121576 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0128 18:35:53.102985  121576 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 18:35:53.103000  121576 command_runner.go:130] > [preflight] Running pre-flight checks
	I0128 18:35:53.137013  121576 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0128 18:35:53.137044  121576 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0128 18:35:53.137102  121576 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1027-gcp
	I0128 18:35:53.137132  121576 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0128 18:35:53.137196  121576 kubeadm.go:322] OS: Linux
	I0128 18:35:53.137210  121576 command_runner.go:130] > OS: Linux
	I0128 18:35:53.137262  121576 kubeadm.go:322] CGROUPS_CPU: enabled
	I0128 18:35:53.137273  121576 command_runner.go:130] > CGROUPS_CPU: enabled
	I0128 18:35:53.137327  121576 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0128 18:35:53.137338  121576 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0128 18:35:53.137403  121576 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0128 18:35:53.137426  121576 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0128 18:35:53.137491  121576 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0128 18:35:53.137504  121576 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0128 18:35:53.137544  121576 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0128 18:35:53.137552  121576 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0128 18:35:53.137609  121576 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0128 18:35:53.137615  121576 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0128 18:35:53.137650  121576 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0128 18:35:53.137657  121576 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0128 18:35:53.137703  121576 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0128 18:35:53.137710  121576 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0128 18:35:53.137750  121576 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0128 18:35:53.137759  121576 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0128 18:35:53.203276  121576 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 18:35:53.203306  121576 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 18:35:53.203406  121576 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 18:35:53.203417  121576 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 18:35:53.203522  121576 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 18:35:53.203534  121576 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 18:35:53.332965  121576 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 18:35:53.337524  121576 out.go:204]   - Generating certificates and keys ...
	I0128 18:35:53.333027  121576 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 18:35:53.337694  121576 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 18:35:53.337731  121576 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0128 18:35:53.337808  121576 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 18:35:53.337818  121576 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0128 18:35:53.436658  121576 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 18:35:53.436700  121576 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 18:35:53.555142  121576 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 18:35:53.555193  121576 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0128 18:35:53.614846  121576 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 18:35:53.614867  121576 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0128 18:35:53.764645  121576 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 18:35:53.764664  121576 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0128 18:35:53.865611  121576 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 18:35:53.865638  121576 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0128 18:35:53.865792  121576 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0128 18:35:53.865804  121576 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0128 18:35:54.098510  121576 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 18:35:54.098541  121576 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0128 18:35:54.098638  121576 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0128 18:35:54.098666  121576 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0128 18:35:54.273367  121576 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 18:35:54.273400  121576 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 18:35:54.338008  121576 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 18:35:54.338047  121576 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 18:35:54.568654  121576 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 18:35:54.568684  121576 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0128 18:35:54.568754  121576 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 18:35:54.568766  121576 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 18:35:54.859049  121576 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 18:35:54.859082  121576 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 18:35:55.019596  121576 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 18:35:55.019624  121576 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 18:35:55.307403  121576 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 18:35:55.307436  121576 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 18:35:55.396701  121576 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 18:35:55.396736  121576 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 18:35:55.408922  121576 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 18:35:55.408952  121576 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 18:35:55.409634  121576 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 18:35:55.409653  121576 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 18:35:55.409684  121576 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0128 18:35:55.409694  121576 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0128 18:35:55.498346  121576 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 18:35:55.498376  121576 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 18:35:55.501290  121576 out.go:204]   - Booting up control plane ...
	I0128 18:35:55.501463  121576 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 18:35:55.501484  121576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 18:35:55.501682  121576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 18:35:55.501702  121576 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 18:35:55.503209  121576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 18:35:55.503229  121576 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 18:35:55.503919  121576 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 18:35:55.503937  121576 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 18:35:55.505659  121576 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 18:35:55.505687  121576 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 18:36:04.507982  121576 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002280 seconds
	I0128 18:36:04.508013  121576 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.002280 seconds
	I0128 18:36:04.508136  121576 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0128 18:36:04.508148  121576 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0128 18:36:04.522271  121576 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0128 18:36:04.522287  121576 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0128 18:36:05.044961  121576 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0128 18:36:05.044984  121576 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0128 18:36:05.045206  121576 kubeadm.go:322] [mark-control-plane] Marking the node multinode-052675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0128 18:36:05.045216  121576 command_runner.go:130] > [mark-control-plane] Marking the node multinode-052675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0128 18:36:05.554288  121576 kubeadm.go:322] [bootstrap-token] Using token: dmigo5.p3ot3922dtqo17e1
	I0128 18:36:05.554315  121576 command_runner.go:130] > [bootstrap-token] Using token: dmigo5.p3ot3922dtqo17e1
	I0128 18:36:05.556145  121576 out.go:204]   - Configuring RBAC rules ...
	I0128 18:36:05.556245  121576 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0128 18:36:05.556258  121576 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0128 18:36:05.558930  121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0128 18:36:05.558949  121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0128 18:36:05.565152  121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0128 18:36:05.565172  121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0128 18:36:05.567895  121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0128 18:36:05.567916  121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0128 18:36:05.571627  121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0128 18:36:05.571652  121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0128 18:36:05.573993  121576 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0128 18:36:05.574013  121576 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0128 18:36:05.583681  121576 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0128 18:36:05.583702  121576 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0128 18:36:05.785655  121576 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0128 18:36:05.785702  121576 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0128 18:36:05.977070  121576 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0128 18:36:05.977094  121576 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0128 18:36:05.978356  121576 kubeadm.go:322] 
	I0128 18:36:05.978435  121576 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0128 18:36:05.978448  121576 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0128 18:36:05.978457  121576 kubeadm.go:322] 
	I0128 18:36:05.978540  121576 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0128 18:36:05.978550  121576 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0128 18:36:05.978557  121576 kubeadm.go:322] 
	I0128 18:36:05.978584  121576 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0128 18:36:05.978593  121576 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0128 18:36:05.978669  121576 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0128 18:36:05.978678  121576 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0128 18:36:05.978794  121576 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0128 18:36:05.978815  121576 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0128 18:36:05.978823  121576 kubeadm.go:322] 
	I0128 18:36:05.978900  121576 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0128 18:36:05.978914  121576 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0128 18:36:05.978944  121576 kubeadm.go:322] 
	I0128 18:36:05.979011  121576 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0128 18:36:05.979020  121576 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0128 18:36:05.979025  121576 kubeadm.go:322] 
	I0128 18:36:05.979101  121576 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0128 18:36:05.979115  121576 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0128 18:36:05.979232  121576 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0128 18:36:05.979243  121576 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0128 18:36:05.979332  121576 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0128 18:36:05.979343  121576 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0128 18:36:05.979348  121576 kubeadm.go:322] 
	I0128 18:36:05.979504  121576 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0128 18:36:05.979524  121576 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0128 18:36:05.979629  121576 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0128 18:36:05.979647  121576 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0128 18:36:05.979674  121576 kubeadm.go:322] 
	I0128 18:36:05.979781  121576 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
	I0128 18:36:05.979798  121576 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
	I0128 18:36:05.979944  121576 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc \
	I0128 18:36:05.979964  121576 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc \
	I0128 18:36:05.980015  121576 kubeadm.go:322] 	--control-plane 
	I0128 18:36:05.980026  121576 command_runner.go:130] > 	--control-plane 
	I0128 18:36:05.980031  121576 kubeadm.go:322] 
	I0128 18:36:05.980141  121576 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0128 18:36:05.980153  121576 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0128 18:36:05.980163  121576 kubeadm.go:322] 
	I0128 18:36:05.980275  121576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
	I0128 18:36:05.980286  121576 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
	I0128 18:36:05.980414  121576 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc 
	I0128 18:36:05.980424  121576 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc 
	I0128 18:36:05.982695  121576 kubeadm.go:322] W0128 18:35:53.095074    1415 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0128 18:36:05.982718  121576 command_runner.go:130] ! W0128 18:35:53.095074    1415 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0128 18:36:05.983025  121576 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0128 18:36:05.983038  121576 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0128 18:36:05.983191  121576 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 18:36:05.983206  121576 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
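[Annotation] The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is kubeadm's pin on the cluster CA: the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from the node's ca.crt (path taken from the cert transfers earlier in this log); a sketch:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA,
        // which Go exposes directly on the parsed certificate.
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }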
	I0128 18:36:05.983229  121576 cni.go:84] Creating CNI manager for ""
	I0128 18:36:05.983250  121576 cni.go:136] 1 nodes found, recommending kindnet
	I0128 18:36:05.985999  121576 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0128 18:36:05.987973  121576 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0128 18:36:05.992528  121576 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0128 18:36:05.992554  121576 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0128 18:36:05.992569  121576 command_runner.go:130] > Device: 34h/52d	Inode: 566552      Links: 1
	I0128 18:36:05.992580  121576 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0128 18:36:05.992588  121576 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0128 18:36:05.992595  121576 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0128 18:36:05.992609  121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.070283151 +0000
	I0128 18:36:05.992615  121576 command_runner.go:130] >  Birth: -
	I0128 18:36:05.993033  121576 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0128 18:36:05.993054  121576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0128 18:36:06.008945  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0128 18:36:06.830005  121576 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0128 18:36:06.834458  121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0128 18:36:06.843053  121576 command_runner.go:130] > serviceaccount/kindnet created
	I0128 18:36:06.851889  121576 command_runner.go:130] > daemonset.apps/kindnet created
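[Annotation] With one node found, minikube recommends kindnet and applies its manifest using the cluster's own version-pinned kubectl and the node-local kubeconfig, producing the four "created" lines above. A trivial Go sketch of the same invocation pattern (paths exactly as shown in the log; run on the node):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Pin kubectl to the cluster's Kubernetes version and point it
        // at the kubeconfig minikube staged on the node.
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.26.1/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }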
	I0128 18:36:06.855359  121576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 18:36:06.855487  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=0b7a59349a2d83a39298292bdec73f3c39ac1090 minikube.k8s.io/name=multinode-052675 minikube.k8s.io/updated_at=2023_01_28T18_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:06.855493  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:06.862506  121576 command_runner.go:130] > -16
	I0128 18:36:06.862544  121576 ops.go:34] apiserver oom_adj: -16
	I0128 18:36:06.936227  121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0128 18:36:06.936325  121576 command_runner.go:130] > node/multinode-052675 labeled
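[Annotation] The `kubectl label nodes ... --overwrite` call above stamps version/commit metadata onto the node. The same effect through client-go is a strategic-merge patch; a sketch with the label set shortened and the kubeconfig path taken from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Strategic merge: existing labels are kept, listed ones are
        // added or overwritten, matching kubectl label --overwrite.
        patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"true","minikube.k8s.io/version":"v1.29.0"}}}`)
        node, err := cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-052675",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("labeled", node.Name)
    }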
	I0128 18:36:06.936329  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:07.021906  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:07.525260  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:07.585967  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:08.025624  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:08.088028  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:08.525449  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:08.588384  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:09.025163  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:09.089307  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:09.525671  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:09.586423  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:10.025589  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:10.086812  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:10.525471  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:10.591071  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:11.025662  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:11.089860  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:11.525539  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:11.591411  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:12.025705  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:12.089965  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:12.525267  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:12.587263  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:13.025436  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:13.090908  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:13.525460  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:13.588673  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:14.024714  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:14.085993  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:14.525437  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:14.587408  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:15.025473  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:15.092275  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:15.525703  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:15.587568  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:16.025664  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:16.090444  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:16.525697  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:16.594451  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:17.025282  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:17.093459  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:17.525689  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:17.591737  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:18.025417  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:18.090100  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:18.525259  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:18.597454  121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0128 18:36:19.025454  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0128 18:36:19.088025  121576 command_runner.go:130] > NAME      SECRETS   AGE
	I0128 18:36:19.088049  121576 command_runner.go:130] > default   0         1s
	I0128 18:36:19.090329  121576 kubeadm.go:1073] duration metric: took 12.234901667s to wait for elevateKubeSystemPrivileges.
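[Annotation] The ~12 s run of `serviceaccounts "default" not found` lines above is expected: minikube polls every 500 ms until kube-controller-manager creates the default ServiceAccount, then records the duration metric. The same wait expressed against client-go (a sketch using wait.PollImmediate, which is deprecated in newer apimachinery but matches this vintage):

    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 500ms, give up after 2 minutes. NotFound means
        // "not yet"; any other error aborts the wait.
        err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return false, nil
            }
            return err == nil, err
        })
        if err != nil {
            panic(err)
        }
    }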
	I0128 18:36:19.090354  121576 kubeadm.go:403] StartCluster complete in 26.079003926s
	I0128 18:36:19.090376  121576 settings.go:142] acquiring lock: {Name:mkdfcfb1354fd39bc122921aea86af6bfa22083f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:36:19.090447  121576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:36:19.091307  121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/kubeconfig: {Name:mkc492e51eda742b57c4c864f32d664b28db65ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:36:19.091555  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 18:36:19.091611  121576 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0128 18:36:19.091701  121576 addons.go:65] Setting storage-provisioner=true in profile "multinode-052675"
	I0128 18:36:19.091723  121576 addons.go:227] Setting addon storage-provisioner=true in "multinode-052675"
	I0128 18:36:19.091725  121576 addons.go:65] Setting default-storageclass=true in profile "multinode-052675"
	W0128 18:36:19.091731  121576 addons.go:236] addon storage-provisioner should already be in state true
	I0128 18:36:19.091745  121576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-052675"
	I0128 18:36:19.091766  121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:36:19.091776  121576 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:36:19.091902  121576 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:36:19.092149  121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:36:19.092327  121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:36:19.092250  121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 18:36:19.092874  121576 cert_rotation.go:137] Starting client certificate rotation controller
	I0128 18:36:19.093061  121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0128 18:36:19.093082  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:19.093094  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:19.093109  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:19.102612  121576 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0128 18:36:19.102636  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:19.102645  121576 round_trippers.go:580]     Audit-Id: 73b9783f-56c9-4068-85f6-6dc140b4a104
	I0128 18:36:19.102652  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:19.102660  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:19.102669  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:19.102676  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:19.102685  121576 round_trippers.go:580]     Content-Length: 291
	I0128 18:36:19.102699  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:19 GMT
	I0128 18:36:19.107168  121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"352","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0128 18:36:19.107691  121576 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"352","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0128 18:36:19.107753  121576 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0128 18:36:19.107760  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:19.107771  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:19.107781  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:19.107791  121576 round_trippers.go:473]     Content-Type: application/json
	I0128 18:36:19.114385  121576 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0128 18:36:19.114413  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:19.114424  121576 round_trippers.go:580]     Audit-Id: 8651b65a-9418-4cf7-ba32-e83bcb0fccec
	I0128 18:36:19.114434  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:19.114443  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:19.114453  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:19.114462  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:19.114471  121576 round_trippers.go:580]     Content-Length: 291
	I0128 18:36:19.114481  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:19 GMT
	I0128 18:36:19.114519  121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"354","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
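[Annotation] The GET/PUT pair above is minikube trimming CoreDNS from 2 replicas to 1 for a single control plane, via the Deployment's `scale` subresource (note `spec.replicas` changing from 2 to 1 between the request and response bodies). In client-go the same round-trip is GetScale followed by UpdateScale; a sketch:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deployments := cs.AppsV1().Deployments("kube-system")
        // Read the current Scale, then write it back with one replica;
        // this GET + PUT matches the two requests in the log.
        scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }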
	I0128 18:36:19.131907  121576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 18:36:19.131049  121576 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:36:19.134001  121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
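The rest.Config dump above is dense; as a minimal client-go sketch of the same certificate-based setup (cert paths taken from this run's log; names and error handling are illustrative, not minikube's own kapi code):

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// Certificate-authenticated config equivalent to the dump above;
		// the paths come from this run and will differ on another host.
		cfg := &rest.Config{
			Host: "https://192.168.58.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key",
				CAFile:   "/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same kind of GET the log shows against /api/v1/nodes/multinode-052675.
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "multinode-052675", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(node.Name)
	}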
	I0128 18:36:19.134359  121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0128 18:36:19.134370  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:19.134382  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:19.134390  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:19.134862  121576 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 18:36:19.134882  121576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 18:36:19.134938  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:36:19.140364  121576 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0128 18:36:19.140385  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:19.140392  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:19.140398  121576 round_trippers.go:580]     Content-Length: 109
	I0128 18:36:19.140403  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:19 GMT
	I0128 18:36:19.140409  121576 round_trippers.go:580]     Audit-Id: d1edbb57-7224-4f28-bd0e-d402d0d16315
	I0128 18:36:19.140414  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:19.140419  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:19.140425  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:19.140470  121576 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[]}
	I0128 18:36:19.140708  121576 addons.go:227] Setting addon default-storageclass=true in "multinode-052675"
	W0128 18:36:19.140725  121576 addons.go:236] addon default-storageclass should already be in state true
	I0128 18:36:19.140749  121576 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:36:19.141157  121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:36:19.161321  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:36:19.166045  121576 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 18:36:19.166070  121576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 18:36:19.166112  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:36:19.198255  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:36:19.212660  121576 command_runner.go:130] > apiVersion: v1
	I0128 18:36:19.212685  121576 command_runner.go:130] > data:
	I0128 18:36:19.212692  121576 command_runner.go:130] >   Corefile: |
	I0128 18:36:19.212698  121576 command_runner.go:130] >     .:53 {
	I0128 18:36:19.212704  121576 command_runner.go:130] >         errors
	I0128 18:36:19.212711  121576 command_runner.go:130] >         health {
	I0128 18:36:19.212719  121576 command_runner.go:130] >            lameduck 5s
	I0128 18:36:19.212724  121576 command_runner.go:130] >         }
	I0128 18:36:19.212731  121576 command_runner.go:130] >         ready
	I0128 18:36:19.212748  121576 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0128 18:36:19.212763  121576 command_runner.go:130] >            pods insecure
	I0128 18:36:19.212771  121576 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0128 18:36:19.212786  121576 command_runner.go:130] >            ttl 30
	I0128 18:36:19.212793  121576 command_runner.go:130] >         }
	I0128 18:36:19.212805  121576 command_runner.go:130] >         prometheus :9153
	I0128 18:36:19.212812  121576 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0128 18:36:19.212820  121576 command_runner.go:130] >            max_concurrent 1000
	I0128 18:36:19.212829  121576 command_runner.go:130] >         }
	I0128 18:36:19.212836  121576 command_runner.go:130] >         cache 30
	I0128 18:36:19.212851  121576 command_runner.go:130] >         loop
	I0128 18:36:19.212863  121576 command_runner.go:130] >         reload
	I0128 18:36:19.212870  121576 command_runner.go:130] >         loadbalance
	I0128 18:36:19.212881  121576 command_runner.go:130] >     }
	I0128 18:36:19.212895  121576 command_runner.go:130] > kind: ConfigMap
	I0128 18:36:19.212900  121576 command_runner.go:130] > metadata:
	I0128 18:36:19.212915  121576 command_runner.go:130] >   creationTimestamp: "2023-01-28T18:36:05Z"
	I0128 18:36:19.212920  121576 command_runner.go:130] >   name: coredns
	I0128 18:36:19.212927  121576 command_runner.go:130] >   namespace: kube-system
	I0128 18:36:19.212936  121576 command_runner.go:130] >   resourceVersion: "225"
	I0128 18:36:19.212942  121576 command_runner.go:130] >   uid: c7d533e7-b7aa-40ce-8e2b-4d63a9280357
	I0128 18:36:19.215863  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0128 18:36:19.389599  121576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 18:36:19.491522  121576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 18:36:19.615097  121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0128 18:36:19.615121  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:19.615133  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:19.615140  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:19.673165  121576 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
	I0128 18:36:19.673207  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:19.673223  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:19.673233  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:19.673241  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:19.673253  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:19.673261  121576 round_trippers.go:580]     Content-Length: 291
	I0128 18:36:19.673274  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:19 GMT
	I0128 18:36:19.673288  121576 round_trippers.go:580]     Audit-Id: 5342c519-c51a-4a2a-9190-7b1860ba99ed
	I0128 18:36:19.673574  121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"363","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0128 18:36:19.673700  121576 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-052675" context rescaled to 1 replicas
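The coredns rescale confirmed above is the standard scale-subresource round trip (GET the Scale object, set spec.replicas, PUT it back). A hedged client-go equivalent, reusing the clientset from the sketch earlier:

	// Read the current scale of the coredns Deployment, then write
	// spec.replicas=1 back through the scale subresource -- the same
	// PUT .../deployments/coredns/scale this log records.
	deployments := clientset.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.Background(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}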
	I0128 18:36:19.673738  121576 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 18:36:19.676939  121576 out.go:177] * Verifying Kubernetes components...
	I0128 18:36:19.678806  121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 18:36:20.091068  121576 command_runner.go:130] > configmap/coredns replaced
	I0128 18:36:20.171480  121576 start.go:919] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
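For reference, the sed pipeline logged at 18:36:19.215863 makes two edits to the Corefile dumped earlier: it inserts a "log" directive before "errors", and the following hosts stanza ahead of the forward block (reconstructed from the command itself, not a dump from this run):

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}

Replacing the ConfigMap with that content is what the "configmap/coredns replaced" and "host record injected" lines above confirm.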
	I0128 18:36:20.286525  121576 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0128 18:36:20.292836  121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0128 18:36:20.300664  121576 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0128 18:36:20.372491  121576 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0128 18:36:20.382122  121576 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0128 18:36:20.395945  121576 command_runner.go:130] > pod/storage-provisioner created
	I0128 18:36:20.473154  121576 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0128 18:36:20.473194  121576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.083565936s)
	I0128 18:36:20.479223  121576 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:36:20.479583  121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 18:36:20.482848  121576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0128 18:36:20.479935  121576 node_ready.go:35] waiting up to 6m0s for node "multinode-052675" to be "Ready" ...
	I0128 18:36:20.485256  121576 addons.go:492] enable addons completed in 1.393647901s: enabled=[storage-provisioner default-storageclass]
	I0128 18:36:20.485319  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:20.485329  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:20.485339  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:20.485348  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:20.487592  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:20.487620  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:20.487632  121576 round_trippers.go:580]     Audit-Id: 59af8370-55e0-4c55-a843-ca0685087a00
	I0128 18:36:20.487644  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:20.487653  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:20.487694  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:20.487708  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:20.487718  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:20 GMT
	I0128 18:36:20.487878  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:20.488625  121576 node_ready.go:49] node "multinode-052675" has status "Ready":"True"
	I0128 18:36:20.488648  121576 node_ready.go:38] duration metric: took 3.412086ms waiting for node "multinode-052675" to be "Ready" ...
	I0128 18:36:20.488660  121576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 18:36:20.488772  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0128 18:36:20.488797  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:20.488814  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:20.488830  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:20.494471  121576 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0128 18:36:20.494546  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:20.494652  121576 round_trippers.go:580]     Audit-Id: 14f43031-1869-4c44-bb62-da29f2e6b736
	I0128 18:36:20.494678  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:20.494698  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:20.494714  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:20.494729  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:20.494755  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:20 GMT
	I0128 18:36:20.495436  121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"380"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61789 chars]
	I0128 18:36:20.499384  121576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:20.499521  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:20.499545  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:20.499568  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:20.499586  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:20.501766  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:20.501830  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:20.501852  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:20.501871  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:20.501888  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:20.501904  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:20.501930  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:20 GMT
	I0128 18:36:20.501949  121576 round_trippers.go:580]     Audit-Id: de281ade-1d97-47da-8670-b13c796f312c
	I0128 18:36:20.502089  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:20.502614  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:20.502650  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:20.502670  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:20.502698  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:20.504336  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:20.504379  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:20.504400  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:20.504417  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:20.504432  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:20.504536  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:20.504560  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:20 GMT
	I0128 18:36:20.504576  121576 round_trippers.go:580]     Audit-Id: 904bdb55-4534-4774-9ecd-3abe1939b745
	I0128 18:36:20.505057  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:21.005826  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:21.005893  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:21.005908  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:21.005921  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:21.008729  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:21.008749  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:21.008756  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:21.008762  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:21 GMT
	I0128 18:36:21.008770  121576 round_trippers.go:580]     Audit-Id: 57996a8b-c1ca-4236-8781-9cf9133adb2c
	I0128 18:36:21.008780  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:21.008788  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:21.008800  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:21.008924  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:21.009488  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:21.009504  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:21.009515  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:21.009531  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:21.011543  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:21.011563  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:21.011572  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:21.011581  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:21 GMT
	I0128 18:36:21.011589  121576 round_trippers.go:580]     Audit-Id: 7944fca9-6c52-4acf-9ab3-2c9471fe708a
	I0128 18:36:21.011598  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:21.011612  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:21.011625  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:21.011773  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:21.506422  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:21.506447  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:21.506460  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:21.506470  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:21.508914  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:21.508939  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:21.508948  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:21.508956  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:21.508964  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:21 GMT
	I0128 18:36:21.508973  121576 round_trippers.go:580]     Audit-Id: b1ba3ea6-e7b4-45bd-b514-b9eaef7c0651
	I0128 18:36:21.508985  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:21.508995  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:21.509107  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:21.509667  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:21.509684  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:21.509694  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:21.509703  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:21.511825  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:21.511852  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:21.511862  121576 round_trippers.go:580]     Audit-Id: d177bbb4-e48e-48e3-8c77-f6b71df8af1b
	I0128 18:36:21.511871  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:21.511884  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:21.511892  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:21.511907  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:21.511924  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:21 GMT
	I0128 18:36:21.512045  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:22.006590  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:22.006613  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:22.006626  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:22.006637  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:22.009290  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:22.009322  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:22.009332  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:22.009340  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:22.009349  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:22.009362  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:22.009376  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:22 GMT
	I0128 18:36:22.009384  121576 round_trippers.go:580]     Audit-Id: 9b9ca020-818a-4534-b5a6-5ff4222bcb56
	I0128 18:36:22.009547  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:22.010154  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:22.010164  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:22.010172  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:22.010179  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:22.012180  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:22.012199  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:22.012209  121576 round_trippers.go:580]     Audit-Id: 09d1f9f8-c0e9-4f42-a841-0912451dfe30
	I0128 18:36:22.012217  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:22.012226  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:22.012239  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:22.012258  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:22.012269  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:22 GMT
	I0128 18:36:22.012420  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:22.505952  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:22.505972  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:22.505980  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:22.505986  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:22.508358  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:22.508381  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:22.508392  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:22.508401  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:22 GMT
	I0128 18:36:22.508410  121576 round_trippers.go:580]     Audit-Id: 237853df-f6ce-4a41-a0e0-b96d89f6beb8
	I0128 18:36:22.508421  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:22.508434  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:22.508466  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:22.508582  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:22.509046  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:22.509063  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:22.509077  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:22.509087  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:22.510774  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:22.510794  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:22.510801  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:22 GMT
	I0128 18:36:22.510807  121576 round_trippers.go:580]     Audit-Id: 4e71e57d-d0d7-4dd2-8d24-5a72ccf8e22d
	I0128 18:36:22.510812  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:22.510817  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:22.510821  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:22.510827  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:22.510965  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:22.511239  121576 pod_ready.go:102] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"False"
	I0128 18:36:23.006648  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:23.006669  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:23.006680  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:23.006688  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:23.009022  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:23.009041  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:23.009050  121576 round_trippers.go:580]     Audit-Id: 78466b3b-68c7-452c-a87a-a251ee64ab26
	I0128 18:36:23.009057  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:23.009062  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:23.009073  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:23.009080  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:23.009095  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:23 GMT
	I0128 18:36:23.009210  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:23.009758  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:23.009774  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:23.009786  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:23.009797  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:23.011660  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:23.011679  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:23.011689  121576 round_trippers.go:580]     Audit-Id: 1c35a27e-2e84-4b0b-bf58-5f325f1d0fea
	I0128 18:36:23.011697  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:23.011706  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:23.011716  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:23.011724  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:23.011735  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:23 GMT
	I0128 18:36:23.011870  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:23.506473  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:23.506493  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:23.506502  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:23.506509  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:23.508547  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:23.508573  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:23.508583  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:23.508588  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:23.508594  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:23 GMT
	I0128 18:36:23.508599  121576 round_trippers.go:580]     Audit-Id: 301a49be-703a-4dc7-883f-cc8a8af40039
	I0128 18:36:23.508604  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:23.508609  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:23.508706  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:23.509124  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:23.509137  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:23.509144  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:23.509152  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:23.510705  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:23.510722  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:23.510729  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:23 GMT
	I0128 18:36:23.510735  121576 round_trippers.go:580]     Audit-Id: f51559a8-6e53-4692-b4cb-9c608854cedb
	I0128 18:36:23.510740  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:23.510746  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:23.510751  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:23.510765  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:23.510855  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:24.006555  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:24.006577  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:24.006586  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:24.006592  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:24.008864  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:24.008901  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:24.008912  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:24 GMT
	I0128 18:36:24.008922  121576 round_trippers.go:580]     Audit-Id: a67d4579-ee6d-41a4-83f3-9e7ab7a7c91c
	I0128 18:36:24.008935  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:24.008940  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:24.008945  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:24.008953  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:24.009060  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:24.009492  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:24.009507  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:24.009514  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:24.009520  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:24.011222  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:24.011237  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:24.011253  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:24.011262  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:24.011274  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:24 GMT
	I0128 18:36:24.011285  121576 round_trippers.go:580]     Audit-Id: 2e409c88-9504-498f-957f-333576ddef2e
	I0128 18:36:24.011293  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:24.011311  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:24.011442  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:24.505933  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:24.505953  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:24.505961  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:24.505968  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:24.508179  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:24.508199  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:24.508206  121576 round_trippers.go:580]     Audit-Id: 2c465c20-9353-4c9d-bd27-8e35558c5c4c
	I0128 18:36:24.508211  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:24.508217  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:24.508222  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:24.508227  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:24.508232  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:24 GMT
	I0128 18:36:24.508320  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:24.508773  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:24.508787  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:24.508794  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:24.508800  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:24.510600  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:24.510623  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:24.510633  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:24 GMT
	I0128 18:36:24.510642  121576 round_trippers.go:580]     Audit-Id: 16b48d69-5451-4900-8f1f-45342a471b0d
	I0128 18:36:24.510649  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:24.510656  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:24.510665  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:24.510673  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:24.510848  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:25.006403  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:25.006423  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:25.006432  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:25.006438  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:25.008809  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:25.008833  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:25.008842  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:25.008850  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:25.008860  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:25.008868  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:25.008878  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:25 GMT
	I0128 18:36:25.008887  121576 round_trippers.go:580]     Audit-Id: 859cdb7b-1891-40d3-9007-39b8a2fdeac4
	I0128 18:36:25.008985  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:25.009446  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:25.009460  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:25.009469  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:25.009475  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:25.011141  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:25.011162  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:25.011173  121576 round_trippers.go:580]     Audit-Id: 83044aba-ec10-4d7f-9cb2-b3e08bb5b6b5
	I0128 18:36:25.011181  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:25.011190  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:25.011215  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:25.011228  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:25.011241  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:25 GMT
	I0128 18:36:25.011360  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:25.011650  121576 pod_ready.go:102] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"False"
	I0128 18:36:25.505953  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:25.505974  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:25.505984  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:25.505992  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:25.507810  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:25.507833  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:25.507843  121576 round_trippers.go:580]     Audit-Id: fb6fa7d0-139a-4b68-8dce-2cca21119354
	I0128 18:36:25.507852  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:25.507860  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:25.507868  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:25.507877  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:25.507888  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:25 GMT
	I0128 18:36:25.507973  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:25.508525  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:25.508540  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:25.508551  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:25.508561  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:25.510090  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:25.510111  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:25.510120  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:25.510128  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:25.510136  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:25.510153  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:25.510166  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:25 GMT
	I0128 18:36:25.510175  121576 round_trippers.go:580]     Audit-Id: ba35f305-b51d-4d97-b71d-db0b26104244
	I0128 18:36:25.510267  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:26.005850  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:26.005871  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:26.005879  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:26.005886  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:26.008372  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:26.008398  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:26.008409  121576 round_trippers.go:580]     Audit-Id: 1d2f296f-ae24-497c-9071-36a734b290ab
	I0128 18:36:26.008418  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:26.008427  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:26.008435  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:26.008468  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:26.008477  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:26 GMT
	I0128 18:36:26.008582  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:26.009144  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:26.009161  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:26.009172  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:26.009182  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:26.010999  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:26.011017  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:26.011026  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:26 GMT
	I0128 18:36:26.011035  121576 round_trippers.go:580]     Audit-Id: 7b888b56-f68f-441a-a79f-975f54fc887e
	I0128 18:36:26.011042  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:26.011051  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:26.011062  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:26.011071  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:26.011184  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:26.505770  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:26.505791  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:26.505799  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:26.505806  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:26.508062  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:26.508084  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:26.508094  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:26 GMT
	I0128 18:36:26.508103  121576 round_trippers.go:580]     Audit-Id: 9f5433c5-002d-4ee5-9446-a23c15744df8
	I0128 18:36:26.508116  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:26.508125  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:26.508133  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:26.508142  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:26.508259  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:26.508772  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:26.508787  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:26.508794  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:26.508801  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:26.510406  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:26.510422  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:26.510429  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:26.510435  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:26 GMT
	I0128 18:36:26.510439  121576 round_trippers.go:580]     Audit-Id: 85a016d6-a471-4d45-8ed2-d9ce82741cf0
	I0128 18:36:26.510444  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:26.510450  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:26.510455  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:26.510595  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:27.006282  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:27.006309  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:27.006321  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:27.006331  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:27.008631  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:27.008656  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:27.008665  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:27.008673  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:27 GMT
	I0128 18:36:27.008686  121576 round_trippers.go:580]     Audit-Id: 6bf7ed5a-6354-4190-9cd4-b7abc1f35099
	I0128 18:36:27.008695  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:27.008703  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:27.008714  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:27.008833  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:27.009277  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:27.009293  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:27.009303  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:27.009312  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:27.011332  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:27.011350  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:27.011357  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:27.011362  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:27 GMT
	I0128 18:36:27.011367  121576 round_trippers.go:580]     Audit-Id: a86c03c5-e667-441e-9285-e5562334b3f3
	I0128 18:36:27.011372  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:27.011378  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:27.011382  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:27.011474  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:27.011796  121576 pod_ready.go:102] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"False"
	I0128 18:36:27.506116  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:27.506140  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:27.506152  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:27.506159  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:27.508289  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:27.508311  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:27.508321  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:27 GMT
	I0128 18:36:27.508329  121576 round_trippers.go:580]     Audit-Id: 78c8c6e3-0b9a-4c24-8e6a-42871028ccf5
	I0128 18:36:27.508340  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:27.508349  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:27.508362  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:27.508373  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:27.508479  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:27.508959  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:27.508971  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:27.508978  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:27.508984  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:27.510675  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:27.510695  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:27.510704  121576 round_trippers.go:580]     Audit-Id: 58b48ffc-a8c8-4c3e-85ae-cad6dfb979f7
	I0128 18:36:27.510714  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:27.510721  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:27.510729  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:27.510737  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:27.510750  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:27 GMT
	I0128 18:36:27.510875  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:28.006498  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:28.006520  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:28.006528  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:28.006535  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:28.008923  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:28.008951  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:28.008962  121576 round_trippers.go:580]     Audit-Id: 738fd5ed-e1cd-439e-a03b-160681435fdc
	I0128 18:36:28.008971  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:28.008980  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:28.008988  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:28.008995  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:28.009000  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:28 GMT
	I0128 18:36:28.009116  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:28.009573  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:28.009587  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:28.009595  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:28.009601  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:28.011529  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:28.011556  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:28.011564  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:28 GMT
	I0128 18:36:28.011570  121576 round_trippers.go:580]     Audit-Id: efe64178-7bca-465a-bb4f-f16115865993
	I0128 18:36:28.011576  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:28.011582  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:28.011587  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:28.011595  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:28.011786  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:28.506317  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:28.506344  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:28.506353  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:28.506359  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:28.508828  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:28.508846  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:28.508853  121576 round_trippers.go:580]     Audit-Id: d2cc854c-9901-4df8-b06a-782e010d87d3
	I0128 18:36:28.508859  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:28.508867  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:28.508872  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:28.508878  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:28.508883  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:28 GMT
	I0128 18:36:28.508985  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0128 18:36:28.509466  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:28.509483  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:28.509490  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:28.509497  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:28.511720  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:28.511737  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:28.511744  121576 round_trippers.go:580]     Audit-Id: c954e0e9-1b66-415e-9b6a-dbe595fd5ec0
	I0128 18:36:28.511750  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:28.511762  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:28.511768  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:28.511773  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:28.511778  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:28 GMT
	I0128 18:36:28.511913  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:29.005869  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:29.005890  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.005899  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.005906  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.008438  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:29.008492  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.008503  121576 round_trippers.go:580]     Audit-Id: 49804a2f-46e7-43af-867e-2803cd5977e2
	I0128 18:36:29.008521  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.008531  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.008537  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.008543  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.008548  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.008636  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5942 chars]
	I0128 18:36:29.009187  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.009206  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.009216  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.009226  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.011163  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.011182  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.011188  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.011196  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.011207  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.011225  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.011238  121576 round_trippers.go:580]     Audit-Id: e7c16605-b219-4a9c-ba9d-5b64ed13cf65
	I0128 18:36:29.011250  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.011360  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:29.011717  121576 pod_ready.go:92] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:29.011739  121576 pod_ready.go:81] duration metric: took 8.512295381s waiting for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.011754  121576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.011833  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nzbz8
	I0128 18:36:29.011846  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.011854  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.011862  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.013527  121576 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0128 18:36:29.013557  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.013567  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.013577  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.013584  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.013593  121576 round_trippers.go:580]     Content-Length: 216
	I0128 18:36:29.013598  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.013606  121576 round_trippers.go:580]     Audit-Id: ce5a0787-cf79-48dc-a122-8dace6298f61
	I0128 18:36:29.013611  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.013632  121576 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-nzbz8\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-nzbz8","kind":"pods"},"code":404}
	I0128 18:36:29.013824  121576 pod_ready.go:97] error getting pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-nzbz8" not found
	I0128 18:36:29.013841  121576 pod_ready.go:81] duration metric: took 2.078165ms waiting for pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace to be "Ready" ...
	E0128 18:36:29.013853  121576 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-nzbz8" not found
	I0128 18:36:29.013863  121576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.013918  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-052675
	I0128 18:36:29.013928  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.013938  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.013953  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.015703  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.015726  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.015736  121576 round_trippers.go:580]     Audit-Id: 8316fddf-a2b6-42f8-9b16-5bb1626a02b7
	I0128 18:36:29.015745  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.015754  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.015763  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.015769  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.015775  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.015869  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-052675","namespace":"kube-system","uid":"cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d","resourceVersion":"261","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.mirror":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.seen":"2023-01-28T18:36:05.844239404Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0128 18:36:29.016252  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.016265  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.016272  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.016278  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.018058  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.018079  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.018088  121576 round_trippers.go:580]     Audit-Id: ab0a4211-b7c6-4a2d-822e-3d014c0640a7
	I0128 18:36:29.018095  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.018103  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.018111  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.018119  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.018130  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.018216  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:29.018477  121576 pod_ready.go:92] pod "etcd-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:29.018488  121576 pod_ready.go:81] duration metric: took 4.6155ms waiting for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.018500  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.018542  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-052675
	I0128 18:36:29.018549  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.018557  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.018563  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.020271  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.020288  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.020294  121576 round_trippers.go:580]     Audit-Id: cd2847b9-c39b-4d07-98d5-a1a37c5a86b1
	I0128 18:36:29.020299  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.020304  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.020309  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.020314  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.020320  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.020423  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-052675","namespace":"kube-system","uid":"c9b8edb5-77fc-4191-b470-8a73c76a3a73","resourceVersion":"291","creationTimestamp":"2023-01-28T18:36:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.mirror":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.seen":"2023-01-28T18:35:55.862480624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0128 18:36:29.020827  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.020841  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.020848  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.020855  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.022491  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.022518  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.022529  121576 round_trippers.go:580]     Audit-Id: 0c8c4e59-0305-496a-bcdb-8f4cc71feb5d
	I0128 18:36:29.022543  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.022553  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.022563  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.022576  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.022589  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.022672  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:29.022927  121576 pod_ready.go:92] pod "kube-apiserver-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:29.022940  121576 pod_ready.go:81] duration metric: took 4.433077ms waiting for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.022949  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.022995  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-052675
	I0128 18:36:29.023003  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.023010  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.023016  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.024647  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.024667  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.024676  121576 round_trippers.go:580]     Audit-Id: c24b7303-0555-451b-a87f-cc6c3e5fd2a1
	I0128 18:36:29.024685  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.024698  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.024710  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.024721  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.024731  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.024846  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-052675","namespace":"kube-system","uid":"6dd849f3-f4b3-4704-a3c5-671cb6a2350c","resourceVersion":"276","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.mirror":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.seen":"2023-01-28T18:36:05.844267614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0128 18:36:29.025242  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.025255  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.025265  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.025275  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.026855  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.026876  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.026886  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.026892  121576 round_trippers.go:580]     Audit-Id: b5811475-b4fb-4dce-b8f2-09d3bcc81b61
	I0128 18:36:29.026897  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.026903  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.026914  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.026922  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.027002  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:29.027288  121576 pod_ready.go:92] pod "kube-controller-manager-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:29.027299  121576 pod_ready.go:81] duration metric: took 4.344922ms waiting for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.027308  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.027345  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hz5nz
	I0128 18:36:29.027353  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.027359  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.027366  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.028780  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:29.028797  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.028806  121576 round_trippers.go:580]     Audit-Id: e62d4c8d-ca71-4b31-862f-d8f7ddd58f52
	I0128 18:36:29.028814  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.028822  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.028832  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.028845  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.028862  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.028940  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hz5nz","generateName":"kube-proxy-","namespace":"kube-system","uid":"85457440-94b9-4686-be3e-dc5b5cbc0fbb","resourceVersion":"390","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0128 18:36:29.206291  121576 request.go:622] Waited for 176.99155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.206357  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.206362  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.206369  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.206376  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.208602  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:29.208627  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.208638  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.208649  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.208657  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.208665  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.208681  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.208690  121576 round_trippers.go:580]     Audit-Id: 5c861d99-7560-4414-a819-82d1a0c8b1f8
	I0128 18:36:29.208796  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:29.209106  121576 pod_ready.go:92] pod "kube-proxy-hz5nz" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:29.209120  121576 pod_ready.go:81] duration metric: took 181.807231ms waiting for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
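
The "Waited for ... due to client-side throttling" entries above are emitted by client-go itself: with the default rest.Config rate limiter (QPS 5, burst 10), a burst of GETs like this readiness loop gets delayed on the client before ever reaching the API server, which is why the message explicitly rules out server-side priority and fairness. A minimal, illustrative sketch of where those knobs live (not minikube's code; the kubeconfig path and the raised values are assumptions for this sketch):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig (path is an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; raising them suppresses the
	// "client-side throttling" waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
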
	I0128 18:36:29.209128  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.406553  121576 request.go:622] Waited for 197.34467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
	I0128 18:36:29.406611  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
	I0128 18:36:29.406616  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.406624  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.406630  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.408808  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:29.408833  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.408843  121576 round_trippers.go:580]     Audit-Id: 94ddde5b-db8d-4b6c-ab1b-189ebad0d69d
	I0128 18:36:29.408853  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.408862  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.408871  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.408879  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.408892  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.409007  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-052675","namespace":"kube-system","uid":"b93c851a-ef3e-45a2-88b6-08bf615609f3","resourceVersion":"263","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.mirror":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.seen":"2023-01-28T18:36:05.844268554Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0128 18:36:29.606803  121576 request.go:622] Waited for 197.353306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.606851  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:29.606860  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.606868  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.606875  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.609109  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:29.609130  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.609137  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.609142  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.609150  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.609158  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.609166  121576 round_trippers.go:580]     Audit-Id: 0010eadb-122e-4380-a1e2-1d20d4646c71
	I0128 18:36:29.609173  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.609275  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
	I0128 18:36:29.609571  121576 pod_ready.go:92] pod "kube-scheduler-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:29.609584  121576 pod_ready.go:81] duration metric: took 400.450424ms waiting for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:29.609594  121576 pod_ready.go:38] duration metric: took 9.120885659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
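
Each per-pod wait above reduces to polling the pod object and checking its PodReady condition until it reports True. A sketch of such a poll with client-go (waitPodReady is a hypothetical helper written for this report, not minikube's pod_ready.go):

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its PodReady condition is True, in the
// spirit of the waits logged above (illustrative helper, assumptions only).
func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
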
	I0128 18:36:29.609611  121576 api_server.go:51] waiting for apiserver process to appear ...
	I0128 18:36:29.609649  121576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 18:36:29.619199  121576 command_runner.go:130] > 2103
	I0128 18:36:29.619944  121576 api_server.go:71] duration metric: took 9.946165762s to wait for apiserver process to appear ...
	I0128 18:36:29.619966  121576 api_server.go:87] waiting for apiserver healthz status ...
	I0128 18:36:29.619979  121576 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0128 18:36:29.624209  121576 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0128 18:36:29.624264  121576 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0128 18:36:29.624275  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.624287  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.624301  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.624982  121576 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0128 18:36:29.624999  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.625009  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.625016  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.625025  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.625033  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.625040  121576 round_trippers.go:580]     Content-Length: 263
	I0128 18:36:29.625047  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.625054  121576 round_trippers.go:580]     Audit-Id: ced58f6f-f49c-472f-8174-66cd7431a080
	I0128 18:36:29.625075  121576 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0128 18:36:29.625171  121576 api_server.go:140] control plane version: v1.26.1
	I0128 18:36:29.625185  121576 api_server.go:130] duration metric: took 5.211798ms to wait for apiserver health ...
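
The health check above is two plain GETs: /healthz must return the literal body "ok", and /version returns the JSON version info shown above, from which the control-plane version is read. A short sketch of the same probe (illustrative only; the real client authenticates with the cluster's certificates rather than skipping TLS verification):

package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues GET <base>/healthz and expects the body "ok",
// mirroring the probe logged above. InsecureSkipVerify keeps the sketch
// short; it is not what a production client should do.
func checkHealthz(base string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("apiserver unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}
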
	I0128 18:36:29.625195  121576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 18:36:29.806582  121576 request.go:622] Waited for 181.319721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0128 18:36:29.806628  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0128 18:36:29.806634  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:29.806654  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:29.806682  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:29.809887  121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0128 18:36:29.809907  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:29.809915  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:29.809921  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:29.809927  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:29.809933  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:29.809938  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:29 GMT
	I0128 18:36:29.809944  121576 round_trippers.go:580]     Audit-Id: a777a3b2-022e-4eae-b8f2-44e8b190b09e
	I0128 18:36:29.810438  121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54993 chars]
	I0128 18:36:29.812177  121576 system_pods.go:59] 8 kube-system pods found
	I0128 18:36:29.812197  121576 system_pods.go:61] "coredns-787d4945fb-c28p8" [d87aee89-96d2-4627-a7ec-00a4d69653aa] Running
	I0128 18:36:29.812202  121576 system_pods.go:61] "etcd-multinode-052675" [cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d] Running
	I0128 18:36:29.812207  121576 system_pods.go:61] "kindnet-8pkk5" [195e6421-dfdc-4781-bf15-3aa74552b4f8] Running
	I0128 18:36:29.812212  121576 system_pods.go:61] "kube-apiserver-multinode-052675" [c9b8edb5-77fc-4191-b470-8a73c76a3a73] Running
	I0128 18:36:29.812217  121576 system_pods.go:61] "kube-controller-manager-multinode-052675" [6dd849f3-f4b3-4704-a3c5-671cb6a2350c] Running
	I0128 18:36:29.812225  121576 system_pods.go:61] "kube-proxy-hz5nz" [85457440-94b9-4686-be3e-dc5b5cbc0fbb] Running
	I0128 18:36:29.812231  121576 system_pods.go:61] "kube-scheduler-multinode-052675" [b93c851a-ef3e-45a2-88b6-08bf615609f3] Running
	I0128 18:36:29.812237  121576 system_pods.go:61] "storage-provisioner" [c317fca6-6da2-4fa0-9db8-6caf19aebf98] Running
	I0128 18:36:29.812242  121576 system_pods.go:74] duration metric: took 187.042112ms to wait for pod list to return data ...
	I0128 18:36:29.812252  121576 default_sa.go:34] waiting for default service account to be created ...
	I0128 18:36:30.006534  121576 request.go:622] Waited for 194.214569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0128 18:36:30.006619  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0128 18:36:30.006627  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:30.006639  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:30.006650  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:30.008947  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:30.008969  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:30.008979  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:30.008987  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:30.008995  121576 round_trippers.go:580]     Content-Length: 261
	I0128 18:36:30.009004  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:30 GMT
	I0128 18:36:30.009017  121576 round_trippers.go:580]     Audit-Id: 105177b7-80c4-47ba-80d3-14b9590892be
	I0128 18:36:30.009029  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:30.009042  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:30.009073  121576 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0deb750e-d81d-409e-bde7-902fc8bf838b","resourceVersion":"336","creationTimestamp":"2023-01-28T18:36:18Z"}}]}
	I0128 18:36:30.009248  121576 default_sa.go:45] found service account: "default"
	I0128 18:36:30.009261  121576 default_sa.go:55] duration metric: took 197.00171ms for default service account to be created ...
	I0128 18:36:30.009270  121576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0128 18:36:30.206720  121576 request.go:622] Waited for 197.378995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0128 18:36:30.206796  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0128 18:36:30.206805  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:30.206814  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:30.206824  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:30.210023  121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0128 18:36:30.210046  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:30.210053  121576 round_trippers.go:580]     Audit-Id: 611e31ce-60ee-42e1-88af-be34369063da
	I0128 18:36:30.210059  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:30.210064  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:30.210073  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:30.210079  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:30.210088  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:30 GMT
	I0128 18:36:30.210541  121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54993 chars]
	I0128 18:36:30.212203  121576 system_pods.go:86] 8 kube-system pods found
	I0128 18:36:30.212221  121576 system_pods.go:89] "coredns-787d4945fb-c28p8" [d87aee89-96d2-4627-a7ec-00a4d69653aa] Running
	I0128 18:36:30.212226  121576 system_pods.go:89] "etcd-multinode-052675" [cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d] Running
	I0128 18:36:30.212231  121576 system_pods.go:89] "kindnet-8pkk5" [195e6421-dfdc-4781-bf15-3aa74552b4f8] Running
	I0128 18:36:30.212235  121576 system_pods.go:89] "kube-apiserver-multinode-052675" [c9b8edb5-77fc-4191-b470-8a73c76a3a73] Running
	I0128 18:36:30.212239  121576 system_pods.go:89] "kube-controller-manager-multinode-052675" [6dd849f3-f4b3-4704-a3c5-671cb6a2350c] Running
	I0128 18:36:30.212243  121576 system_pods.go:89] "kube-proxy-hz5nz" [85457440-94b9-4686-be3e-dc5b5cbc0fbb] Running
	I0128 18:36:30.212247  121576 system_pods.go:89] "kube-scheduler-multinode-052675" [b93c851a-ef3e-45a2-88b6-08bf615609f3] Running
	I0128 18:36:30.212251  121576 system_pods.go:89] "storage-provisioner" [c317fca6-6da2-4fa0-9db8-6caf19aebf98] Running
	I0128 18:36:30.212257  121576 system_pods.go:126] duration metric: took 202.982661ms to wait for k8s-apps to be running ...
	I0128 18:36:30.212263  121576 system_svc.go:44] waiting for kubelet service to be running ....
	I0128 18:36:30.212300  121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 18:36:30.221949  121576 system_svc.go:56] duration metric: took 9.674112ms WaitForService to wait for kubelet.
	I0128 18:36:30.221971  121576 kubeadm.go:578] duration metric: took 10.548199142s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0128 18:36:30.221989  121576 node_conditions.go:102] verifying NodePressure condition ...
	I0128 18:36:30.406392  121576 request.go:622] Waited for 184.323187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0128 18:36:30.406441  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0128 18:36:30.406446  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:30.406453  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:30.406459  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:30.408669  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:30.408690  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:30.408697  121576 round_trippers.go:580]     Audit-Id: 109f5a11-ecaa-4210-857b-85b7807b1975
	I0128 18:36:30.408703  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:30.408708  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:30.408713  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:30.408719  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:30.408725  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:30 GMT
	I0128 18:36:30.408863  121576 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5054 chars]
	I0128 18:36:30.409231  121576 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0128 18:36:30.409253  121576 node_conditions.go:123] node cpu capacity is 8
	I0128 18:36:30.409268  121576 node_conditions.go:105] duration metric: took 187.274258ms to run NodePressure ...
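
The two figures above come straight off the Node object's status: ephemeral-storage and cpu under .status.capacity. A sketch of reading them with client-go (illustrative only, not minikube's node_conditions.go):

package nodeinfo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity reports the same per-node capacity fields the log reads above.
func printCapacity(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
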
	I0128 18:36:30.409294  121576 start.go:228] waiting for startup goroutines ...
	I0128 18:36:30.409303  121576 start.go:233] waiting for cluster config update ...
	I0128 18:36:30.409315  121576 start.go:240] writing updated cluster config ...
	I0128 18:36:30.412218  121576 out.go:177] 
	I0128 18:36:30.414078  121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:36:30.414155  121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
	I0128 18:36:30.416389  121576 out.go:177] * Starting worker node multinode-052675-m02 in cluster multinode-052675
	I0128 18:36:30.417861  121576 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 18:36:30.419573  121576 out.go:177] * Pulling base image ...
	I0128 18:36:30.421938  121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 18:36:30.421971  121576 cache.go:57] Caching tarball of preloaded images
	I0128 18:36:30.422040  121576 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 18:36:30.422072  121576 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 18:36:30.422083  121576 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 18:36:30.422167  121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
	I0128 18:36:30.444938  121576 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 18:36:30.444960  121576 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 18:36:30.444977  121576 cache.go:193] Successfully downloaded all kic artifacts
	I0128 18:36:30.445006  121576 start.go:364] acquiring machines lock for multinode-052675-m02: {Name:mk6ab41f77e252b7e855a5b64fa8f991c0831770 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 18:36:30.445103  121576 start.go:368] acquired machines lock for "multinode-052675-m02" in 78.661µs
	I0128 18:36:30.445125  121576 start.go:93] Provisioning new machine with config: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0128 18:36:30.445200  121576 start.go:125] createHost starting for "m02" (driver="docker")
	I0128 18:36:30.447846  121576 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0128 18:36:30.447967  121576 start.go:159] libmachine.API.Create for "multinode-052675" (driver="docker")
	I0128 18:36:30.447995  121576 client.go:168] LocalClient.Create starting
	I0128 18:36:30.448068  121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem
	I0128 18:36:30.448095  121576 main.go:141] libmachine: Decoding PEM data...
	I0128 18:36:30.448111  121576 main.go:141] libmachine: Parsing certificate...
	I0128 18:36:30.448172  121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem
	I0128 18:36:30.448189  121576 main.go:141] libmachine: Decoding PEM data...
	I0128 18:36:30.448203  121576 main.go:141] libmachine: Parsing certificate...
	I0128 18:36:30.448388  121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 18:36:30.471277  121576 network_create.go:76] Found existing network {name:multinode-052675 subnet:0xc000ff7b90 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0128 18:36:30.471311  121576 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-052675-m02" container
	I0128 18:36:30.471361  121576 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 18:36:30.495006  121576 cli_runner.go:164] Run: docker volume create multinode-052675-m02 --label name.minikube.sigs.k8s.io=multinode-052675-m02 --label created_by.minikube.sigs.k8s.io=true
	I0128 18:36:30.517466  121576 oci.go:103] Successfully created a docker volume multinode-052675-m02
	I0128 18:36:30.517533  121576 cli_runner.go:164] Run: docker run --rm --name multinode-052675-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675-m02 --entrypoint /usr/bin/test -v multinode-052675-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 18:36:31.069870  121576 oci.go:107] Successfully prepared a docker volume multinode-052675-m02
	I0128 18:36:31.069909  121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 18:36:31.069928  121576 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 18:36:31.069992  121576 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 18:36:36.023423  121576 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (4.953360732s)
	I0128 18:36:36.023459  121576 kic.go:199] duration metric: took 4.953526 seconds to extract preloaded images to volume
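
The extraction above runs tar inside a throwaway container from the kicbase image, so the lz4-compressed preload unpacks directly into the node's named volume without touching the host filesystem. A sketch of the same invocation via os/exec (function name and parameters are placeholders; the docker flags mirror the logged command):

package preload

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the logged `docker run --rm --entrypoint /usr/bin/tar`
// invocation: mount the tarball read-only, mount the node volume, untar with lz4.
func extractPreload(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", fmt.Sprintf("%s:/preloaded.tar:ro", tarball),
		"-v", fmt.Sprintf("%s:/extractDir", volume),
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}
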
	W0128 18:36:36.023616  121576 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0128 18:36:36.023730  121576 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 18:36:36.124263  121576 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-052675-m02 --name multinode-052675-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-052675-m02 --network multinode-052675 --ip 192.168.58.3 --volume multinode-052675-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 18:36:36.498023  121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Running}}
	I0128 18:36:36.526995  121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
	I0128 18:36:36.551820  121576 cli_runner.go:164] Run: docker exec multinode-052675-m02 stat /var/lib/dpkg/alternatives/iptables
	I0128 18:36:36.602014  121576 oci.go:144] the created container "multinode-052675-m02" has a running status.
	I0128 18:36:36.602046  121576 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa...
	I0128 18:36:36.807554  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0128 18:36:36.807595  121576 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 18:36:36.879258  121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
	I0128 18:36:36.914092  121576 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 18:36:36.914117  121576 kic_runner.go:114] Args: [docker exec --privileged multinode-052675-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 18:36:36.992079  121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
	I0128 18:36:37.019415  121576 machine.go:88] provisioning docker machine ...
	I0128 18:36:37.019455  121576 ubuntu.go:169] provisioning hostname "multinode-052675-m02"
	I0128 18:36:37.019541  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:37.044812  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:36:37.044976  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0128 18:36:37.044998  121576 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-052675-m02 && echo "multinode-052675-m02" | sudo tee /etc/hostname
	I0128 18:36:37.186481  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675-m02
	
	I0128 18:36:37.186556  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:37.211536  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:36:37.211680  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0128 18:36:37.211697  121576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-052675-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-052675-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 18:36:37.340333  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 18:36:37.340363  121576 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
	I0128 18:36:37.340383  121576 ubuntu.go:177] setting up certificates
	I0128 18:36:37.340394  121576 provision.go:83] configureAuth start
	I0128 18:36:37.340511  121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m02
	I0128 18:36:37.364511  121576 provision.go:138] copyHostCerts
	I0128 18:36:37.364549  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
	I0128 18:36:37.364576  121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
	I0128 18:36:37.364581  121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
	I0128 18:36:37.364647  121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
	I0128 18:36:37.364725  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
	I0128 18:36:37.364741  121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
	I0128 18:36:37.364744  121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
	I0128 18:36:37.364766  121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
	I0128 18:36:37.364818  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
	I0128 18:36:37.364832  121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
	I0128 18:36:37.364839  121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
	I0128 18:36:37.364859  121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
	I0128 18:36:37.364916  121576 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675-m02]
	I0128 18:36:37.465118  121576 provision.go:172] copyRemoteCerts
	I0128 18:36:37.465178  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 18:36:37.465211  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:37.489451  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
	I0128 18:36:37.579876  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0128 18:36:37.579955  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 18:36:37.598767  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0128 18:36:37.598832  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0128 18:36:37.618529  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0128 18:36:37.618593  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 18:36:37.637264  121576 provision.go:86] duration metric: configureAuth took 296.857007ms
	I0128 18:36:37.637293  121576 ubuntu.go:193] setting minikube options for container-runtime
	I0128 18:36:37.637456  121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:36:37.637499  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:37.660948  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:36:37.661131  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0128 18:36:37.661149  121576 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 18:36:37.796548  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 18:36:37.796570  121576 ubuntu.go:71] root file system type: overlay
	I0128 18:36:37.796784  121576 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 18:36:37.796844  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:37.821221  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:36:37.821371  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0128 18:36:37.821432  121576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 18:36:37.961020  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 18:36:37.961085  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:37.985366  121576 main.go:141] libmachine: Using SSH client type: native
	I0128 18:36:37.985514  121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0128 18:36:37.985532  121576 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 18:36:38.645200  121576 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:36:37.955978629 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.58.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0128 18:36:38.645287  121576 machine.go:91] provisioned docker machine in 1.62584735s
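An aside on the provisioning step just completed: the SSH command only swaps in docker.service.new and restarts Docker when `diff -u` reports a difference, so an unchanged unit never triggers a needless daemon restart. A minimal Go sketch of that compare-then-replace pattern (a hypothetical helper, not minikube's actual code; the path is an example):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// updateUnit replaces path with desired and restarts docker, but only when
// the contents differ -- the Go analogue of `diff -u old new || { mv ...; }`.
func updateUnit(path string, desired []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return nil // unchanged: skip daemon-reload/enable/restart entirely
	}
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	if err := updateUnit("/tmp/docker.service.example", unit); err != nil {
		log.Fatal(err)
	}
}
```

The `|| { ... }` in the logged shell command plays the role of the `bytes.Equal` early return above; in this run the diff was non-empty, so the unit was replaced and Docker restarted.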
	I0128 18:36:38.645308  121576 client.go:171] LocalClient.Create took 8.197304103s
	I0128 18:36:38.645337  121576 start.go:167] duration metric: libmachine.API.Create for "multinode-052675" took 8.197368977s
	I0128 18:36:38.645364  121576 start.go:300] post-start starting for "multinode-052675-m02" (driver="docker")
	I0128 18:36:38.645385  121576 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 18:36:38.645468  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 18:36:38.645527  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:38.671290  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
	I0128 18:36:38.768512  121576 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 18:36:38.771066  121576 command_runner.go:130] > NAME="Ubuntu"
	I0128 18:36:38.771090  121576 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0128 18:36:38.771097  121576 command_runner.go:130] > ID=ubuntu
	I0128 18:36:38.771103  121576 command_runner.go:130] > ID_LIKE=debian
	I0128 18:36:38.771108  121576 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0128 18:36:38.771112  121576 command_runner.go:130] > VERSION_ID="20.04"
	I0128 18:36:38.771118  121576 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0128 18:36:38.771125  121576 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0128 18:36:38.771130  121576 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0128 18:36:38.771139  121576 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0128 18:36:38.771146  121576 command_runner.go:130] > VERSION_CODENAME=focal
	I0128 18:36:38.771153  121576 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0128 18:36:38.771248  121576 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 18:36:38.771274  121576 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 18:36:38.771287  121576 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 18:36:38.771297  121576 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 18:36:38.771310  121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
	I0128 18:36:38.771369  121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
	I0128 18:36:38.771449  121576 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
	I0128 18:36:38.771464  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /etc/ssl/certs/103532.pem
	I0128 18:36:38.771556  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 18:36:38.778377  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
	I0128 18:36:38.798148  121576 start.go:303] post-start completed in 152.756842ms
	I0128 18:36:38.798538  121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m02
	I0128 18:36:38.822984  121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
	I0128 18:36:38.823284  121576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 18:36:38.823333  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:38.847642  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
	I0128 18:36:38.936817  121576 command_runner.go:130] > 16%
	I0128 18:36:38.937023  121576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 18:36:38.940834  121576 command_runner.go:130] > 246G
	I0128 18:36:38.940977  121576 start.go:128] duration metric: createHost completed in 8.49576855s
	I0128 18:36:38.940997  121576 start.go:83] releasing machines lock for "multinode-052675-m02", held for 8.495882404s
	I0128 18:36:38.941082  121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m02
	I0128 18:36:38.967252  121576 out.go:177] * Found network options:
	I0128 18:36:38.969215  121576 out.go:177]   - NO_PROXY=192.168.58.2
	W0128 18:36:38.971065  121576 proxy.go:119] fail to check proxy env: Error ip not in block
	W0128 18:36:38.971126  121576 proxy.go:119] fail to check proxy env: Error ip not in block
	I0128 18:36:38.971206  121576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 18:36:38.971245  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:38.971281  121576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 18:36:38.971332  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:36:38.997315  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
	I0128 18:36:38.999813  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
	I0128 18:36:39.121016  121576 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0128 18:36:39.121066  121576 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0128 18:36:39.121078  121576 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0128 18:36:39.121085  121576 command_runner.go:130] > Device: e3h/227d	Inode: 568458      Links: 1
	I0128 18:36:39.121095  121576 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0128 18:36:39.121103  121576 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0128 18:36:39.121111  121576 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0128 18:36:39.121118  121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.814355792 +0000
	I0128 18:36:39.121124  121576 command_runner.go:130] >  Birth: -
	I0128 18:36:39.121191  121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 18:36:39.141685  121576 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
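The find/sed one-liner above patches the loopback CNI config in place: it inserts a `"name": "loopback"` field when missing and pins `cniVersion` to 1.0.0. A rough Go equivalent, assuming the file is plain JSON (a sketch, not minikube's implementation):

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

// patchLoopback adds a missing "name" and pins cniVersion, like the sed above.
func patchLoopback(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // sed inserts this just before "type": "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil {
		log.Fatal(err)
	}
}
```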
	I0128 18:36:39.141797  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 18:36:39.148794  121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 18:36:39.161466  121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 18:36:39.177917  121576 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0128 18:36:39.177964  121576 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0128 18:36:39.177984  121576 start.go:483] detecting cgroup driver to use...
	I0128 18:36:39.178016  121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 18:36:39.178143  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 18:36:39.191245  121576 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0128 18:36:39.191280  121576 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0128 18:36:39.192045  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 18:36:39.200533  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 18:36:39.208865  121576 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 18:36:39.208931  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 18:36:39.216787  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 18:36:39.224383  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 18:36:39.232622  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 18:36:39.240862  121576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 18:36:39.248617  121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
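The run of sed commands above rewrites /etc/containerd/config.toml so containerd agrees with the detected "cgroupfs" driver. A few of those rewrites expressed in Go for readability (a sketch; the patterns mirror the log's sed expressions):

```go
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Line-oriented rewrites equivalent to the sed expressions in the log.
	rules := []struct{ re, repl string }{
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	out := raw
	for _, r := range rules {
		out = regexp.MustCompile(r.re).ReplaceAll(out, []byte(r.repl))
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
```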
	I0128 18:36:39.258134  121576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 18:36:39.264408  121576 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0128 18:36:39.264958  121576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 18:36:39.271530  121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:36:39.356085  121576 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 18:36:39.440673  121576 start.go:483] detecting cgroup driver to use...
	I0128 18:36:39.440726  121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 18:36:39.440763  121576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 18:36:39.451995  121576 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0128 18:36:39.452019  121576 command_runner.go:130] > [Unit]
	I0128 18:36:39.452032  121576 command_runner.go:130] > Description=Docker Application Container Engine
	I0128 18:36:39.452040  121576 command_runner.go:130] > Documentation=https://docs.docker.com
	I0128 18:36:39.452047  121576 command_runner.go:130] > BindsTo=containerd.service
	I0128 18:36:39.452056  121576 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0128 18:36:39.452063  121576 command_runner.go:130] > Wants=network-online.target
	I0128 18:36:39.452068  121576 command_runner.go:130] > Requires=docker.socket
	I0128 18:36:39.452072  121576 command_runner.go:130] > StartLimitBurst=3
	I0128 18:36:39.452082  121576 command_runner.go:130] > StartLimitIntervalSec=60
	I0128 18:36:39.452091  121576 command_runner.go:130] > [Service]
	I0128 18:36:39.452101  121576 command_runner.go:130] > Type=notify
	I0128 18:36:39.452110  121576 command_runner.go:130] > Restart=on-failure
	I0128 18:36:39.452120  121576 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0128 18:36:39.452135  121576 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0128 18:36:39.452150  121576 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0128 18:36:39.452160  121576 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0128 18:36:39.452175  121576 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0128 18:36:39.452189  121576 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0128 18:36:39.452204  121576 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0128 18:36:39.452219  121576 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0128 18:36:39.452238  121576 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0128 18:36:39.452248  121576 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0128 18:36:39.452257  121576 command_runner.go:130] > ExecStart=
	I0128 18:36:39.452283  121576 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0128 18:36:39.452295  121576 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0128 18:36:39.452305  121576 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0128 18:36:39.452319  121576 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0128 18:36:39.452327  121576 command_runner.go:130] > LimitNOFILE=infinity
	I0128 18:36:39.452334  121576 command_runner.go:130] > LimitNPROC=infinity
	I0128 18:36:39.452339  121576 command_runner.go:130] > LimitCORE=infinity
	I0128 18:36:39.452351  121576 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0128 18:36:39.452363  121576 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0128 18:36:39.452373  121576 command_runner.go:130] > TasksMax=infinity
	I0128 18:36:39.452384  121576 command_runner.go:130] > TimeoutStartSec=0
	I0128 18:36:39.452397  121576 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0128 18:36:39.452406  121576 command_runner.go:130] > Delegate=yes
	I0128 18:36:39.452420  121576 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0128 18:36:39.452432  121576 command_runner.go:130] > KillMode=process
	I0128 18:36:39.452453  121576 command_runner.go:130] > [Install]
	I0128 18:36:39.452460  121576 command_runner.go:130] > WantedBy=multi-user.target
	I0128 18:36:39.452486  121576 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 18:36:39.452534  121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 18:36:39.461413  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 18:36:39.473228  121576 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0128 18:36:39.473259  121576 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0128 18:36:39.474224  121576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 18:36:39.572515  121576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 18:36:39.656210  121576 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 18:36:39.656252  121576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 18:36:39.679177  121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:36:39.758690  121576 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 18:36:39.968276  121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 18:36:40.053216  121576 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0128 18:36:40.053283  121576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 18:36:40.127939  121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 18:36:40.200235  121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 18:36:40.278497  121576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 18:36:40.290297  121576 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 18:36:40.290366  121576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 18:36:40.293667  121576 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0128 18:36:40.293695  121576 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0128 18:36:40.293722  121576 command_runner.go:130] > Device: ech/236d	Inode: 206         Links: 1
	I0128 18:36:40.293732  121576 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0128 18:36:40.293743  121576 command_runner.go:130] > Access: 2023-01-28 18:36:40.284205966 +0000
	I0128 18:36:40.293751  121576 command_runner.go:130] > Modify: 2023-01-28 18:36:40.284205966 +0000
	I0128 18:36:40.293763  121576 command_runner.go:130] > Change: 2023-01-28 18:36:40.288206356 +0000
	I0128 18:36:40.293772  121576 command_runner.go:130] >  Birth: -
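The stat call above implements the "Will wait 60s for socket path /var/run/cri-dockerd.sock" step: keep checking until the unix socket exists or the deadline passes. A sketch of such a wait loop (hypothetical helper; minikube's real implementation differs):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or times out.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
}
```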
	I0128 18:36:40.293793  121576 start.go:551] Will wait 60s for crictl version
	I0128 18:36:40.293836  121576 ssh_runner.go:195] Run: which crictl
	I0128 18:36:40.296677  121576 command_runner.go:130] > /usr/bin/crictl
	I0128 18:36:40.296754  121576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 18:36:40.389499  121576 command_runner.go:130] > Version:  0.1.0
	I0128 18:36:40.389524  121576 command_runner.go:130] > RuntimeName:  docker
	I0128 18:36:40.389532  121576 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0128 18:36:40.389561  121576 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0128 18:36:40.391399  121576 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 18:36:40.391465  121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 18:36:40.418729  121576 command_runner.go:130] > 20.10.23
	I0128 18:36:40.420063  121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 18:36:40.446068  121576 command_runner.go:130] > 20.10.23
	I0128 18:36:40.448985  121576 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 18:36:40.451061  121576 out.go:177]   - env NO_PROXY=192.168.58.2
	I0128 18:36:40.452613  121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 18:36:40.475321  121576 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0128 18:36:40.478656  121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
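That /etc/hosts one-liner drops any stale `host.minikube.internal` line and appends the current mapping, writing through a temp file. The same idea in Go (illustrative sketch only):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// setHostsEntry mirrors the log's `grep -v` + `echo` pipeline: remove any
// line ending in "\t<host>", then append "<ip>\t<host>".
func setHostsEntry(path, host, ip string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "host.minikube.internal", "192.168.58.1"); err != nil {
		log.Fatal(err)
	}
}
```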
	I0128 18:36:40.488113  121576 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.3
	I0128 18:36:40.488148  121576 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:36:40.488269  121576 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
	I0128 18:36:40.488305  121576 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
	I0128 18:36:40.488316  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0128 18:36:40.488329  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0128 18:36:40.488339  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0128 18:36:40.488349  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0128 18:36:40.488393  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
	W0128 18:36:40.488420  121576 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
	I0128 18:36:40.488429  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 18:36:40.488489  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
	I0128 18:36:40.488515  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
	I0128 18:36:40.488536  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
	I0128 18:36:40.488577  121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
	I0128 18:36:40.488613  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem -> /usr/share/ca-certificates/10353.pem
	I0128 18:36:40.488626  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /usr/share/ca-certificates/103532.pem
	I0128 18:36:40.488638  121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:36:40.488957  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 18:36:40.507014  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 18:36:40.524478  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 18:36:40.543353  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 18:36:40.560429  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
	I0128 18:36:40.578015  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
	I0128 18:36:40.595313  121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 18:36:40.613204  121576 ssh_runner.go:195] Run: openssl version
	I0128 18:36:40.618093  121576 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0128 18:36:40.618185  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
	I0128 18:36:40.626593  121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
	I0128 18:36:40.629593  121576 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
	I0128 18:36:40.629631  121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
	I0128 18:36:40.629675  121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
	I0128 18:36:40.634281  121576 command_runner.go:130] > 3ec20f2e
	I0128 18:36:40.634474  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 18:36:40.641620  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 18:36:40.648710  121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:36:40.651424  121576 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:36:40.651520  121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:36:40.651563  121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 18:36:40.655814  121576 command_runner.go:130] > b5213941
	I0128 18:36:40.655989  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 18:36:40.662836  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
	I0128 18:36:40.669614  121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
	I0128 18:36:40.672366  121576 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
	I0128 18:36:40.672504  121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
	I0128 18:36:40.672544  121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
	I0128 18:36:40.676971  121576 command_runner.go:130] > 51391683
	I0128 18:36:40.677135  121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
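Each of the three certificate installs above follows one recipe: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash (e.g. b5213941 for minikubeCA.pem), and symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL's hashed directory lookup can find it. A sketch of that recipe in Go (hypothetical helper; shells out to openssl exactly as the log does):

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert mirrors the log's sequence: hash the PEM with openssl, then
// symlink it as /etc/ssl/certs/<hash>.0 (the `ln -fs` in the log).
func installCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f: replace a stale link if present
	return os.Symlink(pem, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```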
	I0128 18:36:40.683930  121576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 18:36:40.748516  121576 command_runner.go:130] > cgroupfs
	I0128 18:36:40.751678  121576 cni.go:84] Creating CNI manager for ""
	I0128 18:36:40.751703  121576 cni.go:136] 2 nodes found, recommending kindnet
	I0128 18:36:40.751716  121576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 18:36:40.751737  121576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 18:36:40.751904  121576 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-052675-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
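The kubeadm config rendered above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). For reference, a small sketch that splits such a file and lists each document's kind, assuming a local copy named kubeadm.yaml and using gopkg.in/yaml.v3:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // iterates over the "---"-separated documents
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```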
	
	I0128 18:36:40.751981  121576 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 18:36:40.752026  121576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 18:36:40.758803  121576 command_runner.go:130] > kubeadm
	I0128 18:36:40.758835  121576 command_runner.go:130] > kubectl
	I0128 18:36:40.758841  121576 command_runner.go:130] > kubelet
	I0128 18:36:40.759320  121576 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 18:36:40.759384  121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0128 18:36:40.766330  121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0128 18:36:40.779694  121576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 18:36:40.793186  121576 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0128 18:36:40.796094  121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 18:36:40.806174  121576 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:36:40.806443  121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:36:40.806396  121576 start.go:299] JoinCluster: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 18:36:40.806509  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0128 18:36:40.806559  121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:36:40.830799  121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:36:40.976483  121576 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ylk1e7.vhmzv49ssdy5cgya --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc 
	I0128 18:36:40.976545  121576 start.go:320] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0128 18:36:40.976581  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ylk1e7.vhmzv49ssdy5cgya --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m02"
	I0128 18:36:41.015053  121576 command_runner.go:130] > [preflight] Running pre-flight checks
	I0128 18:36:41.041547  121576 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0128 18:36:41.041577  121576 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0128 18:36:41.041584  121576 command_runner.go:130] > OS: Linux
	I0128 18:36:41.041592  121576 command_runner.go:130] > CGROUPS_CPU: enabled
	I0128 18:36:41.041600  121576 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0128 18:36:41.041607  121576 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0128 18:36:41.041614  121576 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0128 18:36:41.041622  121576 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0128 18:36:41.041631  121576 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0128 18:36:41.041646  121576 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0128 18:36:41.041657  121576 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0128 18:36:41.041667  121576 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0128 18:36:41.124938  121576 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0128 18:36:41.124971  121576 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0128 18:36:41.152033  121576 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 18:36:41.152062  121576 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 18:36:41.152069  121576 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0128 18:36:41.233531  121576 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0128 18:36:42.752176  121576 command_runner.go:130] > This node has joined the cluster:
	I0128 18:36:42.752204  121576 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0128 18:36:42.752210  121576 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0128 18:36:42.752217  121576 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0128 18:36:42.754759  121576 command_runner.go:130] ! W0128 18:36:41.014600    1345 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0128 18:36:42.754794  121576 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0128 18:36:42.754804  121576 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 18:36:42.754821  121576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ylk1e7.vhmzv49ssdy5cgya --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m02": (1.778225763s)
	I0128 18:36:42.754836  121576 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0128 18:36:42.842093  121576 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0128 18:36:42.918534  121576 start.go:301] JoinCluster complete in 2.112133258s
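The join that just completed is two shell steps: ask the control plane for a join command (`kubeadm token create --print-join-command --ttl=0`), then run it on the worker with the extra flags seen in the log. A compressed Go sketch of that sequence (illustrative only; minikube executes both commands over SSH on the respective nodes, and the flags below are copied from the log):

```go
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (control plane): mint a token and print the matching join command.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Step 2 (worker): run the printed command, plus the flags from the log.
	join := strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all" +
		" --cri-socket /var/run/cri-dockerd.sock" +
		" --node-name=multinode-052675-m02"
	if err := exec.Command("/bin/bash", "-c", "sudo "+join).Run(); err != nil {
		log.Fatal(err)
	}
}
```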
	I0128 18:36:42.918559  121576 cni.go:84] Creating CNI manager for ""
	I0128 18:36:42.918564  121576 cni.go:136] 2 nodes found, recommending kindnet
	I0128 18:36:42.918600  121576 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0128 18:36:42.921867  121576 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0128 18:36:42.921894  121576 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0128 18:36:42.921910  121576 command_runner.go:130] > Device: 34h/52d	Inode: 566552      Links: 1
	I0128 18:36:42.921920  121576 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0128 18:36:42.921929  121576 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0128 18:36:42.921939  121576 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0128 18:36:42.921946  121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.070283151 +0000
	I0128 18:36:42.921952  121576 command_runner.go:130] >  Birth: -
	I0128 18:36:42.921998  121576 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0128 18:36:42.922009  121576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0128 18:36:42.935745  121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0128 18:36:43.088293  121576 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0128 18:36:43.091858  121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0128 18:36:43.094168  121576 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0128 18:36:43.106856  121576 command_runner.go:130] > daemonset.apps/kindnet configured
	I0128 18:36:43.110918  121576 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:36:43.111135  121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 18:36:43.111398  121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0128 18:36:43.111408  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.111416  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.111422  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.113568  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:43.113592  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.113601  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.113609  121576 round_trippers.go:580]     Audit-Id: 0d4f7448-78d2-45da-baa4-0f8b4b1bc78d
	I0128 18:36:43.113617  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.113625  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.113634  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.113650  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.113660  121576 round_trippers.go:580]     Content-Length: 291
	I0128 18:36:43.113702  121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"428","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0128 18:36:43.113815  121576 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-052675" context rescaled to 1 replicas
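The "rescaled to 1 replicas" line corresponds to a read-modify-write of the coredns Deployment's Scale subresource. A client-go sketch of the equivalent call (not minikube's exact code; the kubeconfig path is the one from the log):

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15565-3259/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	deploys := cs.AppsV1().Deployments("kube-system")
	scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1 // one coredns replica suffices for a small cluster
		if _, err := deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
}
```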
	I0128 18:36:43.113846  121576 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0128 18:36:43.117234  121576 out.go:177] * Verifying Kubernetes components...
	I0128 18:36:43.119329  121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 18:36:43.129543  121576 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:36:43.129791  121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 18:36:43.130045  121576 node_ready.go:35] waiting up to 6m0s for node "multinode-052675-m02" to be "Ready" ...
	I0128 18:36:43.130114  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
	I0128 18:36:43.130124  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.130131  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.130138  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.132268  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:43.132297  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.132305  121576 round_trippers.go:580]     Audit-Id: 46e35ead-b028-4a54-9a6a-c2cc83b6a177
	I0128 18:36:43.132314  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.132324  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.132333  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.132346  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.132355  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.132513  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
	I0128 18:36:43.132879  121576 node_ready.go:49] node "multinode-052675-m02" has status "Ready":"True"
	I0128 18:36:43.132909  121576 node_ready.go:38] duration metric: took 2.838167ms waiting for node "multinode-052675-m02" to be "Ready" ...
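node_ready polls the Node object until its Ready condition reports True; here the node was already Ready, so the wait took 2.8ms. A client-go sketch of that readiness check (illustrative only; node name and kubeconfig path taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15565-3259/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-052675-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node to become Ready")
}
```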
	I0128 18:36:43.132923  121576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 18:36:43.133010  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0128 18:36:43.133021  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.133031  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.133043  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.136130  121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0128 18:36:43.136160  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.136175  121576 round_trippers.go:580]     Audit-Id: 6cb55554-134f-4dfc-87d5-a387dab56006
	I0128 18:36:43.136183  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.136195  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.136208  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.136219  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.136234  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.136716  121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"472"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65332 chars]
	I0128 18:36:43.138683  121576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.138742  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
	I0128 18:36:43.138750  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.138757  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.138771  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.140963  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:43.140986  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.140994  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.141004  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.141012  121576 round_trippers.go:580]     Audit-Id: 4ba6aa82-e083-416c-bac1-7840ffd40ca0
	I0128 18:36:43.141024  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.141036  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.141047  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.141146  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5942 chars]
	I0128 18:36:43.141763  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:43.141781  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.141792  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.141802  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.143630  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:43.143651  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.143657  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.143663  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.143668  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.143678  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.143683  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.143691  121576 round_trippers.go:580]     Audit-Id: 73011a9e-1d73-42f2-9dbd-154177538634
	I0128 18:36:43.143784  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
	I0128 18:36:43.144065  121576 pod_ready.go:92] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:43.144078  121576 pod_ready.go:81] duration metric: took 5.377157ms waiting for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
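The coredns wait just completed shows the per-pod pattern repeated for every system-critical pod below: list kube-system pods, then GET each pod and check status.conditions for PodReady. A short sketch of that check with client-go; the helper name, kubeconfig path, and label selector are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether a pod's PodReady condition is True, the same
// signal the pod_ready waits in this log key off.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// List a subset of the system-critical pods by label, as the PodList
	// GET above does for all of them.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Println(pods.Items[i].Name, "ready:", isPodReady(&pods.Items[i]))
	}
}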
	I0128 18:36:43.144087  121576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.144128  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-052675
	I0128 18:36:43.144135  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.144142  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.144149  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.145814  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:43.145835  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.145846  121576 round_trippers.go:580]     Audit-Id: ec1f57af-0d79-4d62-b208-bcbd3e3e4819
	I0128 18:36:43.145853  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.145861  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.145867  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.145879  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.145891  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.145992  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-052675","namespace":"kube-system","uid":"cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d","resourceVersion":"261","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.mirror":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.seen":"2023-01-28T18:36:05.844239404Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0128 18:36:43.146372  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:43.146385  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.146392  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.146399  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.147987  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:43.148008  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.148017  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.148023  121576 round_trippers.go:580]     Audit-Id: 9739b17f-cb68-4aed-931c-be147d044104
	I0128 18:36:43.148031  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.148044  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.148059  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.148068  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.148175  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
	I0128 18:36:43.148474  121576 pod_ready.go:92] pod "etcd-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:43.148488  121576 pod_ready.go:81] duration metric: took 4.39583ms waiting for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.148501  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.148537  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-052675
	I0128 18:36:43.148544  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.148551  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.148557  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.150306  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:43.150340  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.150352  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.150362  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.150371  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.150382  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.150390  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.150398  121576 round_trippers.go:580]     Audit-Id: 924049d3-2248-4162-9eb1-bd8752c395b4
	I0128 18:36:43.150521  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-052675","namespace":"kube-system","uid":"c9b8edb5-77fc-4191-b470-8a73c76a3a73","resourceVersion":"291","creationTimestamp":"2023-01-28T18:36:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.mirror":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.seen":"2023-01-28T18:35:55.862480624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0128 18:36:43.150919  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:43.150932  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.150938  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.150945  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.152524  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:43.152547  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.152556  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.152565  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.152580  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.152588  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.152597  121576 round_trippers.go:580]     Audit-Id: 411ade94-1797-4a2f-bc5e-2b870c32eb22
	I0128 18:36:43.152606  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.152684  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
	I0128 18:36:43.152978  121576 pod_ready.go:92] pod "kube-apiserver-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:43.152996  121576 pod_ready.go:81] duration metric: took 4.490065ms waiting for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.153005  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.153046  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-052675
	I0128 18:36:43.153053  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.153059  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.153065  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.154708  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:43.154724  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.154731  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.154738  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.154746  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.154763  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.154772  121576 round_trippers.go:580]     Audit-Id: c63dabd3-7958-4cdc-b20a-aad5bdf90d09
	I0128 18:36:43.154780  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.154905  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-052675","namespace":"kube-system","uid":"6dd849f3-f4b3-4704-a3c5-671cb6a2350c","resourceVersion":"276","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.mirror":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.seen":"2023-01-28T18:36:05.844267614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0128 18:36:43.155325  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:43.155337  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.155344  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.155351  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.156837  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:43.156851  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.156858  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.156863  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.156868  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.156873  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.156878  121576 round_trippers.go:580]     Audit-Id: adb7b46b-4b8d-4355-8f33-79e60a6e24cb
	I0128 18:36:43.156883  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.156946  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
	I0128 18:36:43.157201  121576 pod_ready.go:92] pod "kube-controller-manager-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:43.157211  121576 pod_ready.go:81] duration metric: took 4.198488ms waiting for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.157218  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8btnm" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:43.330638  121576 request.go:622] Waited for 173.322814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
	I0128 18:36:43.330687  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
	I0128 18:36:43.330691  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.330698  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.330705  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.332919  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:43.332944  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.332954  121576 round_trippers.go:580]     Audit-Id: a1683f59-f481-4bfb-8c5b-3116e080cf41
	I0128 18:36:43.332962  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.332969  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.332978  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.332986  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.332995  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.333113  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"458","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0128 18:36:43.530918  121576 request.go:622] Waited for 197.36611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
	I0128 18:36:43.530965  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
	I0128 18:36:43.530972  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:43.530980  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:43.530986  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:43.533260  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:43.533324  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:43.533340  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:43.533348  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:43.533355  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:43.533364  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:43 GMT
	I0128 18:36:43.533370  121576 round_trippers.go:580]     Audit-Id: bf890605-ab19-4fb0-a42c-34089281c630
	I0128 18:36:43.533378  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:43.533463  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
	I0128 18:36:44.035366  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
	I0128 18:36:44.035392  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:44.035404  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:44.035414  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:44.037983  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:44.038016  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:44.038031  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:44.038040  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:44.038047  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:44.038056  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:44.038070  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:44 GMT
	I0128 18:36:44.038080  121576 round_trippers.go:580]     Audit-Id: eec19d04-62c0-4f95-912d-69519fc965be
	I0128 18:36:44.038248  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"475","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0128 18:36:44.038868  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
	I0128 18:36:44.038889  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:44.038902  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:44.038912  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:44.041472  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:44.041498  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:44.041507  121576 round_trippers.go:580]     Audit-Id: 9df2c3c2-81a4-40f4-8a43-2208d0bc4cf1
	I0128 18:36:44.041515  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:44.041523  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:44.041531  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:44.041546  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:44.041558  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:44 GMT
	I0128 18:36:44.041675  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
	I0128 18:36:44.535255  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
	I0128 18:36:44.535278  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:44.535290  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:44.535299  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:44.539063  121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0128 18:36:44.539095  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:44.539107  121576 round_trippers.go:580]     Audit-Id: 63d8c775-c92f-493b-80d6-f6f63dc44ad8
	I0128 18:36:44.539117  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:44.539127  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:44.539137  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:44.539146  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:44.539157  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:44 GMT
	I0128 18:36:44.539292  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"475","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0128 18:36:44.539876  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
	I0128 18:36:44.539894  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:44.539904  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:44.539917  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:44.541755  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:44.541776  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:44.541786  121576 round_trippers.go:580]     Audit-Id: a17e8604-aac9-4536-9b63-e678db583453
	I0128 18:36:44.541792  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:44.541797  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:44.541802  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:44.541809  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:44.541826  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:44 GMT
	I0128 18:36:44.541913  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
	I0128 18:36:45.034523  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
	I0128 18:36:45.034542  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:45.034550  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:45.034556  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:45.036646  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:45.036668  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:45.036679  121576 round_trippers.go:580]     Audit-Id: 0144aace-231f-4c28-bc5d-15e161d7ea9c
	I0128 18:36:45.036686  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:45.036692  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:45.036697  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:45.036707  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:45.036712  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:45 GMT
	I0128 18:36:45.036829  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"483","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0128 18:36:45.037323  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
	I0128 18:36:45.037335  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:45.037342  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:45.037348  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:45.038929  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:45.038948  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:45.038957  121576 round_trippers.go:580]     Audit-Id: 58d8d617-5cad-43cf-a5f2-1b63ff74a55c
	I0128 18:36:45.038964  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:45.038972  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:45.038981  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:45.038992  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:45.039008  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:45 GMT
	I0128 18:36:45.039097  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
	I0128 18:36:45.039377  121576 pod_ready.go:92] pod "kube-proxy-8btnm" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:45.039397  121576 pod_ready.go:81] duration metric: took 1.882175089s waiting for pod "kube-proxy-8btnm" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:45.039407  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:45.039449  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hz5nz
	I0128 18:36:45.039456  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:45.039463  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:45.039469  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:45.041062  121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0128 18:36:45.041082  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:45.041089  121576 round_trippers.go:580]     Audit-Id: d2e1a54f-3c5c-40fb-99aa-3bbce32a3ef7
	I0128 18:36:45.041097  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:45.041104  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:45.041112  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:45.041130  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:45.041138  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:45 GMT
	I0128 18:36:45.041258  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hz5nz","generateName":"kube-proxy-","namespace":"kube-system","uid":"85457440-94b9-4686-be3e-dc5b5cbc0fbb","resourceVersion":"390","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0128 18:36:45.130879  121576 request.go:622] Waited for 89.230928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:45.130930  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:45.130935  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:45.130943  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:45.130949  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:45.133379  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:45.133407  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:45.133418  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:45 GMT
	I0128 18:36:45.133425  121576 round_trippers.go:580]     Audit-Id: b2c70ba2-dbbd-43d5-b0ea-915e4e5ca6e2
	I0128 18:36:45.133431  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:45.133436  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:45.133442  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:45.133451  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:45.133544  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
	I0128 18:36:45.133878  121576 pod_ready.go:92] pod "kube-proxy-hz5nz" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:45.133891  121576 pod_ready.go:81] duration metric: took 94.475521ms waiting for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:45.133901  121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:45.330258  121576 request.go:622] Waited for 196.285598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
	I0128 18:36:45.330337  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
	I0128 18:36:45.330347  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:45.330360  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:45.330373  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:45.333422  121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0128 18:36:45.333452  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:45.333464  121576 round_trippers.go:580]     Audit-Id: 9e88eb74-afdf-4872-98fd-db150d835c02
	I0128 18:36:45.333473  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:45.333482  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:45.333491  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:45.333503  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:45.333510  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:45 GMT
	I0128 18:36:45.333659  121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-052675","namespace":"kube-system","uid":"b93c851a-ef3e-45a2-88b6-08bf615609f3","resourceVersion":"263","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.mirror":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.seen":"2023-01-28T18:36:05.844268554Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0128 18:36:45.530517  121576 request.go:622] Waited for 196.35734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:45.530578  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
	I0128 18:36:45.530589  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:45.530602  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:45.530616  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:45.533138  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:45.533159  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:45.533169  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:45.533178  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:45.533187  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:45 GMT
	I0128 18:36:45.533201  121576 round_trippers.go:580]     Audit-Id: 17906d91-9566-4c00-bcdc-7baa438bfb0a
	I0128 18:36:45.533214  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:45.533227  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:45.533346  121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
	I0128 18:36:45.533760  121576 pod_ready.go:92] pod "kube-scheduler-multinode-052675" in "kube-system" namespace has status "Ready":"True"
	I0128 18:36:45.533779  121576 pod_ready.go:81] duration metric: took 399.869077ms waiting for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
	I0128 18:36:45.533794  121576 pod_ready.go:38] duration metric: took 2.40085647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 18:36:45.533817  121576 system_svc.go:44] waiting for kubelet service to be running ....
	I0128 18:36:45.533866  121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 18:36:45.574170  121576 system_svc.go:56] duration metric: took 40.343931ms WaitForService to wait for kubelet.
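The kubelet check above relies only on the exit code of systemctl is-active --quiet, run over SSH with sudo. A minimal local sketch of the same idea (illustrative; it drops the SSH transport and sudo that minikube's ssh_runner uses):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` prints nothing and exits 0 iff
	// the unit is active, so the exit code alone carries the answer.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}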
	I0128 18:36:45.574201  121576 kubeadm.go:578] duration metric: took 2.460321991s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0128 18:36:45.574226  121576 node_conditions.go:102] verifying NodePressure condition ...
	I0128 18:36:45.730698  121576 request.go:622] Waited for 156.388209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0128 18:36:45.730772  121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0128 18:36:45.730781  121576 round_trippers.go:469] Request Headers:
	I0128 18:36:45.730791  121576 round_trippers.go:473]     Accept: application/json, */*
	I0128 18:36:45.730801  121576 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0128 18:36:45.733356  121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0128 18:36:45.733379  121576 round_trippers.go:577] Response Headers:
	I0128 18:36:45.733390  121576 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
	I0128 18:36:45.733398  121576 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
	I0128 18:36:45.733405  121576 round_trippers.go:580]     Date: Sat, 28 Jan 2023 18:36:45 GMT
	I0128 18:36:45.733414  121576 round_trippers.go:580]     Audit-Id: 698ec73f-1071-439b-becc-e2d689f805e7
	I0128 18:36:45.733421  121576 round_trippers.go:580]     Cache-Control: no-cache, private
	I0128 18:36:45.733430  121576 round_trippers.go:580]     Content-Type: application/json
	I0128 18:36:45.733594  121576 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10269 chars]
	I0128 18:36:45.734082  121576 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0128 18:36:45.734100  121576 node_conditions.go:123] node cpu capacity is 8
	I0128 18:36:45.734113  121576 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0128 18:36:45.734119  121576 node_conditions.go:123] node cpu capacity is 8
	I0128 18:36:45.734124  121576 node_conditions.go:105] duration metric: took 159.892379ms to run NodePressure ...
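The NodePressure verification lists all nodes once and reads each node's capacity, which is where the two "storage ephemeral capacity"/"cpu capacity" pairs above come from (one pair per node). A hedged client-go sketch of the same read, with an assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity; assign before
		// calling String(), which has a pointer receiver.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}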
	I0128 18:36:45.734138  121576 start.go:228] waiting for startup goroutines ...
	I0128 18:36:45.734148  121576 start.go:240] writing updated cluster config ...
	I0128 18:36:45.759094  121576 ssh_runner.go:195] Run: rm -f paused
	I0128 18:36:45.816971  121576 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0128 18:36:45.821156  121576 out.go:177] * Done! kubectl is now configured to use "multinode-052675" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 18:35:45 UTC, end at Sat 2023-01-28 18:39:56 UTC. --
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.465696045Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467831566Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467857592Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467875883Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467884163Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478296235Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478321919Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478327287Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478476443Z" level=info msg="Loading containers: start."
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.557652750Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.599021222Z" level=info msg="Loading containers: done."
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.609500824Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.609559862Z" level=info msg="Daemon has completed initialization"
	Jan 28 18:35:51 multinode-052675 systemd[1]: Started Docker Application Container Engine.
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.628354782Z" level=info msg="API listen on [::]:2376"
	Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.632111306Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 28 18:36:20 multinode-052675 dockerd[928]: time="2023-01-28T18:36:20.204103943Z" level=info msg="ignoring event" container=8a1ae5e27e92612a02b6a8fc51ad3571fa87d2715702914217ad377e0b906466 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:20 multinode-052675 dockerd[928]: time="2023-01-28T18:36:20.592188993Z" level=info msg="ignoring event" container=88ecb2f902999c079288b99bd89bbfab63c88f490278f02ec03640fbb04e976c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:21 multinode-052675 dockerd[928]: time="2023-01-28T18:36:21.105618127Z" level=info msg="ignoring event" container=b89a334adecaefc426798a148bababa79049022aa49faa427d14cf48bc59860e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:22 multinode-052675 dockerd[928]: time="2023-01-28T18:36:22.113395644Z" level=info msg="ignoring event" container=ec7a6aa2ca969bb401287aed4fd63503e2c68a2b830d6f2e4c8b01fd99cc775c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:22 multinode-052675 dockerd[928]: time="2023-01-28T18:36:22.793066702Z" level=info msg="ignoring event" container=d9b7f41b12e705e71977b3f62ede990c0c3fe51cc614aa09dcc559f8918198eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:23 multinode-052675 dockerd[928]: time="2023-01-28T18:36:23.790651650Z" level=info msg="ignoring event" container=5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:24 multinode-052675 dockerd[928]: time="2023-01-28T18:36:24.826643855Z" level=info msg="ignoring event" container=676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:25 multinode-052675 dockerd[928]: time="2023-01-28T18:36:25.833532899Z" level=info msg="ignoring event" container=f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 18:36:26 multinode-052675 dockerd[928]: time="2023-01-28T18:36:26.858053500Z" level=info msg="ignoring event" container=5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	7f0e24e944cec       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Running             busybox                   0                   cce38480cd66f
	93745ccc8cb55       5185b96f0becf                                                                                         3 minutes ago       Running             coredns                   0                   45f0655a1ddb9
	46773a35b11bd       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              3 minutes ago       Running             kindnet-cni               0                   8fc19cc66c4f3
	aeb357b4e2094       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       0                   f88feb83a6497
	acc3adc5776a5       46a6bb3c77ce0                                                                                         3 minutes ago       Running             kube-proxy                0                   54b173c8cf0ca
	a377326949167       deb04688c4a35                                                                                         3 minutes ago       Running             kube-apiserver            0                   65692890c63e7
	2e6c4095a9938       655493523f607                                                                                         3 minutes ago       Running             kube-scheduler            0                   9cef735af13e0
	90ac627c99fcf       e9c08e11b07f6                                                                                         3 minutes ago       Running             kube-controller-manager   0                   de160ca186d78
	c4215b5f1c76b       fce326961ae2d                                                                                         3 minutes ago       Running             etcd                      0                   52885346a4282
	
	* 
	* ==> coredns [93745ccc8cb5] <==
	* [INFO] 10.244.1.2:35138 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130331s
	[INFO] 10.244.0.3:58973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139836s
	[INFO] 10.244.0.3:55407 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001990074s
	[INFO] 10.244.0.3:59073 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118522s
	[INFO] 10.244.0.3:52141 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088195s
	[INFO] 10.244.0.3:35586 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001580209s
	[INFO] 10.244.0.3:43340 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083768s
	[INFO] 10.244.0.3:39293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071122s
	[INFO] 10.244.0.3:59044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065519s
	[INFO] 10.244.1.2:42075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180039s
	[INFO] 10.244.1.2:56436 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108243s
	[INFO] 10.244.1.2:46724 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100908s
	[INFO] 10.244.1.2:40322 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107786s
	[INFO] 10.244.0.3:48174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163856s
	[INFO] 10.244.0.3:38165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096818s
	[INFO] 10.244.0.3:39710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066036s
	[INFO] 10.244.0.3:57439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097139s
	[INFO] 10.244.1.2:43000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164401s
	[INFO] 10.244.1.2:34418 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156548s
	[INFO] 10.244.1.2:52316 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147518s
	[INFO] 10.244.1.2:59610 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100187s
	[INFO] 10.244.0.3:34048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013944s
	[INFO] 10.244.0.3:43257 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112776s
	[INFO] 10.244.0.3:41888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009795s
	[INFO] 10.244.0.3:52087 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008824s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-052675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-052675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7a59349a2d83a39298292bdec73f3c39ac1090
	                    minikube.k8s.io/name=multinode-052675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_28T18_36_06_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 18:36:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-052675
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 18:39:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 18:37:07 +0000   Sat, 28 Jan 2023 18:36:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 18:37:07 +0000   Sat, 28 Jan 2023 18:36:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 18:37:07 +0000   Sat, 28 Jan 2023 18:36:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 18:37:07 +0000   Sat, 28 Jan 2023 18:36:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-052675
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871752Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871752Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1a46cb41c9d45969ef9bdf4a48d9b28
	  System UUID:                59b520aa-117e-4374-90f6-231e5d061c51
	  Boot ID:                    c2f3d462-b386-480a-bd1b-c0d90433fb30
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-g84sq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 coredns-787d4945fb-c28p8                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m38s
	  kube-system                 etcd-multinode-052675                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m50s
	  kube-system                 kindnet-8pkk5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m38s
	  kube-system                 kube-apiserver-multinode-052675             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-multinode-052675    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-proxy-hz5nz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-scheduler-multinode-052675             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m36s  kube-proxy       
	  Normal  Starting                 3m51s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s  kubelet          Node multinode-052675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s  kubelet          Node multinode-052675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s  kubelet          Node multinode-052675 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m50s  kubelet          Node multinode-052675 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m50s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m40s  kubelet          Node multinode-052675 status is now: NodeReady
	  Normal  RegisteredNode           3m39s  node-controller  Node multinode-052675 event: Registered Node multinode-052675 in Controller
	
	
	Name:               multinode-052675-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-052675-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 18:36:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-052675-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 18:39:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 18:37:12 +0000   Sat, 28 Jan 2023 18:36:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 18:37:12 +0000   Sat, 28 Jan 2023 18:36:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 18:37:12 +0000   Sat, 28 Jan 2023 18:36:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 18:37:12 +0000   Sat, 28 Jan 2023 18:36:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-052675-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871752Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871752Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1a46cb41c9d45969ef9bdf4a48d9b28
	  System UUID:                31460efa-712b-41df-976f-e2d9604391d1
	  Boot ID:                    c2f3d462-b386-480a-bd1b-c0d90433fb30
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-g4wvp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-x4b6m               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m15s
	  kube-system                 kube-proxy-8btnm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m12s                  kube-proxy       
	  Normal  Starting                 3m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m15s (x2 over 3m15s)  kubelet          Node multinode-052675-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x2 over 3m15s)  kubelet          Node multinode-052675-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x2 over 3m15s)  kubelet          Node multinode-052675-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m14s                  node-controller  Node multinode-052675-m02 event: Registered Node multinode-052675-m02 in Controller
	  Normal  NodeReady                3m14s                  kubelet          Node multinode-052675-m02 status is now: NodeReady
	
	
	Name:               multinode-052675-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-052675-m03
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 18:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-052675-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 18:39:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 18:37:46 +0000   Sat, 28 Jan 2023 18:37:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 18:37:46 +0000   Sat, 28 Jan 2023 18:37:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 18:37:46 +0000   Sat, 28 Jan 2023 18:37:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 18:37:46 +0000   Sat, 28 Jan 2023 18:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-052675-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871752Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871752Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1a46cb41c9d45969ef9bdf4a48d9b28
	  System UUID:                8b3e074d-faf8-4a45-9c58-bdde0f022139
	  Boot ID:                    c2f3d462-b386-480a-bd1b-c0d90433fb30
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ncz56       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m47s
	  kube-system                 kube-proxy-h7dv6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m6s                   kube-proxy  
	  Normal  Starting                 2m44s                  kube-proxy  
	  Normal  Starting                 2m48s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet     Node multinode-052675-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m47s                  kubelet     Node multinode-052675-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet     Node multinode-052675-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet     Node multinode-052675-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m20s (x7 over 2m26s)  kubelet     Node multinode-052675-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x7 over 2m26s)  kubelet     Node multinode-052675-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m26s)  kubelet     Node multinode-052675-m03 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.008762] FS-Cache: O-key=[8] '8da00f0200000000'
	[  +0.006277] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007955] FS-Cache: N-cookie d=00000000cd953fdb{9p.inode} n=00000000b97f67f9
	[  +0.008737] FS-Cache: N-key=[8] '8da00f0200000000'
	[  +3.705268] FS-Cache: Duplicate cookie detected
	[  +0.004702] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006748] FS-Cache: O-cookie d=00000000cd953fdb{9p.inode} n=00000000395d31ad
	[  +0.007360] FS-Cache: O-key=[8] '8ca00f0200000000'
	[  +0.004951] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006702] FS-Cache: N-cookie d=00000000cd953fdb{9p.inode} n=000000005727019b
	[  +0.008767] FS-Cache: N-key=[8] '8ca00f0200000000'
	[  +0.406755] FS-Cache: Duplicate cookie detected
	[  +0.004703] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006860] FS-Cache: O-cookie d=00000000cd953fdb{9p.inode} n=000000009937b098
	[  +0.007457] FS-Cache: O-key=[8] '9aa00f0200000000'
	[  +0.004949] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006560] FS-Cache: N-cookie d=00000000cd953fdb{9p.inode} n=00000000a225fe91
	[  +0.007364] FS-Cache: N-key=[8] '9aa00f0200000000'
	[  +2.415873] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 dd 3d 45 61 a1 08 06
	[Jan28 18:29] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan28 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 41 fe 2b e6 75 08 06
	[Jan28 18:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 24 06 ac 67 52 08 06
	
	* 
	* ==> etcd [c4215b5f1c76] <==
	* {"level":"info","ts":"2023-01-28T18:36:00.408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-01-28T18:36:00.408Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-052675 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  18:39:56 up 22 min,  0 users,  load average: 0.39, 0.95, 0.90
	Linux multinode-052675 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [a37732694916] <==
	* I0128 18:36:02.873375       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0128 18:36:02.895523       1 controller.go:615] quota admission added evaluator for: namespaces
	I0128 18:36:02.903629       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0128 18:36:02.929817       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0128 18:36:02.929861       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0128 18:36:02.930123       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0128 18:36:02.930143       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0128 18:36:02.930447       1 shared_informer.go:280] Caches are synced for configmaps
	I0128 18:36:02.930552       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0128 18:36:03.522813       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0128 18:36:03.735798       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0128 18:36:03.739181       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0128 18:36:03.739197       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0128 18:36:04.165037       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0128 18:36:04.198137       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0128 18:36:04.308790       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0128 18:36:04.317286       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0128 18:36:04.318546       1 controller.go:615] quota admission added evaluator for: endpoints
	I0128 18:36:04.322956       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0128 18:36:04.781709       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0128 18:36:05.773263       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0128 18:36:05.784279       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0128 18:36:05.795029       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0128 18:36:18.250522       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0128 18:36:18.499856       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [90ac627c99fc] <==
	* I0128 18:36:18.759346       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-c28p8"
	I0128 18:36:19.123897       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0128 18:36:19.132020       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-nzbz8"
	W0128 18:36:41.937397       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-052675-m02" does not exist
	I0128 18:36:41.947229       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8btnm"
	I0128 18:36:41.949315       1 range_allocator.go:372] Set node multinode-052675-m02 PodCIDR to [10.244.1.0/24]
	I0128 18:36:41.949483       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-x4b6m"
	W0128 18:36:42.550325       1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
	W0128 18:36:42.699194       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-052675-m02. Assuming now as a timestamp.
	I0128 18:36:42.699264       1 event.go:294] "Event occurred" object="multinode-052675-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-052675-m02 event: Registered Node multinode-052675-m02 in Controller"
	I0128 18:36:46.637286       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0128 18:36:46.647355       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-g4wvp"
	I0128 18:36:46.651040       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-g84sq"
	W0128 18:37:09.288908       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-052675-m03" does not exist
	W0128 18:37:09.289000       1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
	I0128 18:37:09.296110       1 range_allocator.go:372] Set node multinode-052675-m03 PodCIDR to [10.244.2.0/24]
	I0128 18:37:09.298787       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ncz56"
	I0128 18:37:09.298817       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h7dv6"
	W0128 18:37:09.909007       1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m03 node
	I0128 18:37:12.705570       1 event.go:294] "Event occurred" object="multinode-052675-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-052675-m03 event: Registered Node multinode-052675-m03 in Controller"
	W0128 18:37:12.705587       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-052675-m03. Assuming now as a timestamp.
	W0128 18:37:36.220558       1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
	W0128 18:37:36.348964       1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
	W0128 18:37:36.348991       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-052675-m03" does not exist
	I0128 18:37:36.358605       1 range_allocator.go:372] Set node multinode-052675-m03 PodCIDR to [10.244.3.0/24]
	
	* 
	* ==> kube-proxy [acc3adc5776a] <==
	* I0128 18:36:20.495309       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0128 18:36:20.495411       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0128 18:36:20.495442       1 server_others.go:535] "Using iptables proxy"
	I0128 18:36:20.573936       1 server_others.go:176] "Using iptables Proxier"
	I0128 18:36:20.573966       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0128 18:36:20.573974       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0128 18:36:20.574002       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0128 18:36:20.574037       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0128 18:36:20.574517       1 server.go:655] "Version info" version="v1.26.1"
	I0128 18:36:20.574538       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 18:36:20.575067       1 config.go:317] "Starting service config controller"
	I0128 18:36:20.575126       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0128 18:36:20.575129       1 config.go:444] "Starting node config controller"
	I0128 18:36:20.575146       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0128 18:36:20.575128       1 config.go:226] "Starting endpoint slice config controller"
	I0128 18:36:20.575158       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0128 18:36:20.676141       1 shared_informer.go:280] Caches are synced for service config
	I0128 18:36:20.676215       1 shared_informer.go:280] Caches are synced for node config
	I0128 18:36:20.676238       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [2e6c4095a993] <==
	* W0128 18:36:02.879070       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0128 18:36:02.879083       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0128 18:36:02.879174       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0128 18:36:02.879188       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0128 18:36:02.879224       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0128 18:36:02.879240       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0128 18:36:02.879281       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0128 18:36:02.879303       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0128 18:36:02.879350       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0128 18:36:02.879378       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0128 18:36:02.879412       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0128 18:36:02.879427       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0128 18:36:02.879464       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0128 18:36:02.879479       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0128 18:36:02.879490       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0128 18:36:02.879503       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0128 18:36:03.698225       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0128 18:36:03.698272       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0128 18:36:03.781454       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0128 18:36:03.781488       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0128 18:36:03.828609       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0128 18:36:03.828643       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0128 18:36:03.831588       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0128 18:36:03.831616       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0128 18:36:04.274555       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 18:35:45 UTC, end at Sat 2023-01-28 18:39:57 UTC. --
	Jan 28 18:36:23 multinode-052675 kubelet[2333]: E0128 18:36:23.817000    2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
	Jan 28 18:36:23 multinode-052675 kubelet[2333]: E0128 18:36:23.817082    2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
	Jan 28 18:36:23 multinode-052675 kubelet[2333]: I0128 18:36:23.920689    2333 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ed0eb028-4b66-4332-b5b0-368ffd3e7e15 path="/var/lib/kubelet/pods/ed0eb028-4b66-4332-b5b0-368ffd3e7e15/volumes"
	Jan 28 18:36:24 multinode-052675 kubelet[2333]: I0128 18:36:24.557757    2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441"
	Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853588    2333 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\""
	Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853658    2333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
	Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853697    2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
	Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853772    2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
	Jan 28 18:36:25 multinode-052675 kubelet[2333]: I0128 18:36:25.572409    2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e"
	Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864582    2333 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\""
	Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864650    2333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
	Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864678    2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
	Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864741    2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
	Jan 28 18:36:26 multinode-052675 kubelet[2333]: I0128 18:36:26.388050    2333 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 28 18:36:26 multinode-052675 kubelet[2333]: I0128 18:36:26.388665    2333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 28 18:36:26 multinode-052675 kubelet[2333]: I0128 18:36:26.586132    2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469"
	Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890175    2333 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\""
	Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890253    2333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
	Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890289    2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
	Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890376    2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
	Jan 28 18:36:27 multinode-052675 kubelet[2333]: I0128 18:36:27.602763    2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc"
	Jan 28 18:36:28 multinode-052675 kubelet[2333]: I0128 18:36:28.641347    2333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-c28p8" podStartSLOduration=10.641292152 pod.CreationTimestamp="2023-01-28 18:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 18:36:28.641089485 +0000 UTC m=+22.889728002" watchObservedRunningTime="2023-01-28 18:36:28.641292152 +0000 UTC m=+22.889930687"
	Jan 28 18:36:46 multinode-052675 kubelet[2333]: I0128 18:36:46.657757    2333 topology_manager.go:210] "Topology Admit Handler"
	Jan 28 18:36:46 multinode-052675 kubelet[2333]: I0128 18:36:46.827090    2333 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zszks\" (UniqueName: \"kubernetes.io/projected/07aca5c2-c0d3-4c53-92e8-47705123ffd3-kube-api-access-zszks\") pod \"busybox-6b86dd6d48-g84sq\" (UID: \"07aca5c2-c0d3-4c53-92e8-47705123ffd3\") " pod="default/busybox-6b86dd6d48-g84sq"
	Jan 28 18:36:48 multinode-052675 kubelet[2333]: I0128 18:36:48.786444    2333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-g84sq" podStartSLOduration=-9.223372034068388e+09 pod.CreationTimestamp="2023-01-28 18:36:46 +0000 UTC" firstStartedPulling="2023-01-28 18:36:47.242166223 +0000 UTC m=+41.490804732" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 18:36:48.786118931 +0000 UTC m=+43.034757446" watchObservedRunningTime="2023-01-28 18:36:48.786388476 +0000 UTC m=+43.035026992"
	
	* 
	* ==> storage-provisioner [aeb357b4e209] <==
	* I0128 18:36:21.407911       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0128 18:36:21.417104       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0128 18:36:21.417185       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0128 18:36:21.480316       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0128 18:36:21.480478       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b79d23c8-285b-4959-abf4-ca24577373ed", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-052675_4d6fd3f0-4271-421b-bde5-7ca41a58c0d6 became leader
	I0128 18:36:21.480523       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-052675_4d6fd3f0-4271-421b-bde5-7ca41a58c0d6!
	I0128 18:36:21.581345       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-052675_4d6fd3f0-4271-421b-bde5-7ca41a58c0d6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-052675 -n multinode-052675
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-052675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (149.10s)


Test pass (288/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.24
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.26.1/json-events 4.75
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.27
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.18
18 TestDownloadOnlyKic 2.92
19 TestBinaryMirror 0.89
20 TestOffline 61.57
22 TestAddons/Setup 101.13
24 TestAddons/parallel/Registry 14.72
25 TestAddons/parallel/Ingress 23.37
26 TestAddons/parallel/MetricsServer 5.59
27 TestAddons/parallel/HelmTiller 10.81
29 TestAddons/parallel/CSI 39.9
30 TestAddons/parallel/Headlamp 10.02
31 TestAddons/parallel/CloudSpanner 5.52
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 11.12
36 TestCertOptions 34.46
37 TestCertExpiration 247.29
38 TestDockerFlags 36.35
39 TestForceSystemdFlag 45.43
40 TestForceSystemdEnv 38.65
41 TestKVMDriverInstallOrUpdate 2.04
45 TestErrorSpam/setup 26.09
46 TestErrorSpam/start 0.98
47 TestErrorSpam/status 1.13
48 TestErrorSpam/pause 1.45
49 TestErrorSpam/unpause 1.44
50 TestErrorSpam/stop 11.03
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 43.14
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 44.83
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.1
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.62
62 TestFunctional/serial/CacheCmd/cache/add_local 0.75
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.8
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.13
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
70 TestFunctional/serial/ExtraConfig 43.58
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.19
73 TestFunctional/serial/LogsFileCmd 1.2
75 TestFunctional/parallel/ConfigCmd 0.52
76 TestFunctional/parallel/DashboardCmd 13.76
77 TestFunctional/parallel/DryRun 0.66
78 TestFunctional/parallel/InternationalLanguage 0.3
79 TestFunctional/parallel/StatusCmd 1.26
82 TestFunctional/parallel/ServiceCmd 11.15
83 TestFunctional/parallel/ServiceCmdConnect 10.84
84 TestFunctional/parallel/AddonsCmd 0.21
85 TestFunctional/parallel/PersistentVolumeClaim 28.63
87 TestFunctional/parallel/SSHCmd 0.94
88 TestFunctional/parallel/CpCmd 1.85
89 TestFunctional/parallel/MySQL 27.33
90 TestFunctional/parallel/FileSync 0.45
91 TestFunctional/parallel/CertSync 2.6
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
99 TestFunctional/parallel/License 0.16
100 TestFunctional/parallel/Version/short 0.09
101 TestFunctional/parallel/Version/components 0.92
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
106 TestFunctional/parallel/ImageCommands/ImageBuild 2.9
107 TestFunctional/parallel/ImageCommands/Setup 0.95
108 TestFunctional/parallel/DockerEnv/bash 1.5
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.19
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.34
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.18
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.83
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.91
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.25
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.66
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
130 TestFunctional/parallel/ProfileCmd/profile_list 0.43
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
132 TestFunctional/parallel/MountCmd/any-port 8.93
133 TestFunctional/parallel/MountCmd/specific-port 2.28
134 TestFunctional/delete_addon-resizer_images 0.08
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
140 TestImageBuild/serial/NormalBuild 0.92
141 TestImageBuild/serial/BuildWithBuildArg 1.05
142 TestImageBuild/serial/BuildWithDockerIgnore 0.43
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.34
146 TestIngressAddonLegacy/StartLegacyK8sCluster 53.72
148 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.22
149 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.4
150 TestIngressAddonLegacy/serial/ValidateIngressAddons 40.26
153 TestJSONOutput/start/Command 50.85
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.59
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.55
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 5.81
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.3
178 TestKicCustomNetwork/create_custom_network 28.79
179 TestKicCustomNetwork/use_default_bridge_network 27.92
180 TestKicExistingNetwork 29
181 TestKicCustomSubnet 27.46
182 TestKicStaticIP 28
183 TestMainNoArgs 0.07
184 TestMinikubeProfile 56.99
187 TestMountStart/serial/StartWithMountFirst 6.34
188 TestMountStart/serial/VerifyMountFirst 0.33
189 TestMountStart/serial/StartWithMountSecond 6.17
190 TestMountStart/serial/VerifyMountSecond 0.33
191 TestMountStart/serial/DeleteFirst 1.58
192 TestMountStart/serial/VerifyMountPostDelete 0.33
193 TestMountStart/serial/Stop 1.25
194 TestMountStart/serial/RestartStopped 7.41
195 TestMountStart/serial/VerifyMountPostStop 0.33
198 TestMultiNode/serial/FreshStart2Nodes 67.81
199 TestMultiNode/serial/DeployApp2Nodes 6.88
200 TestMultiNode/serial/PingHostFrom2Pods 0.97
201 TestMultiNode/serial/AddNode 19.49
202 TestMultiNode/serial/ProfileList 0.38
203 TestMultiNode/serial/CopyFile 12.09
204 TestMultiNode/serial/StopNode 2.46
206 TestMultiNode/serial/RestartKeepsNodes 90.8
207 TestMultiNode/serial/DeleteNode 5.05
208 TestMultiNode/serial/StopMultiNode 21.79
209 TestMultiNode/serial/RestartMultiNode 52.94
210 TestMultiNode/serial/ValidateNameConflict 28.27
215 TestPreload 148.05
217 TestScheduledStopUnix 101.8
218 TestSkaffold 55.96
220 TestInsufficientStorage 11.26
221 TestRunningBinaryUpgrade 61.1
223 TestKubernetesUpgrade 122.14
224 TestMissingContainerUpgrade 115.44
236 TestStoppedBinaryUpgrade/Setup 0.36
237 TestStoppedBinaryUpgrade/Upgrade 82.48
239 TestPause/serial/Start 49.59
240 TestPause/serial/SecondStartNoReconfiguration 44.33
241 TestStoppedBinaryUpgrade/MinikubeLogs 1.47
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
251 TestNoKubernetes/serial/StartWithK8s 31.34
252 TestNetworkPlugins/group/auto/Start 48.62
253 TestNoKubernetes/serial/StartWithStopK8s 15.95
254 TestPause/serial/Pause 0.68
255 TestPause/serial/VerifyStatus 0.39
256 TestPause/serial/Unpause 0.62
257 TestPause/serial/PauseAgain 0.88
258 TestPause/serial/DeletePaused 2.36
259 TestPause/serial/VerifyDeletedResources 31.76
260 TestNoKubernetes/serial/Start 5.45
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
262 TestNoKubernetes/serial/ProfileList 20.82
263 TestNetworkPlugins/group/auto/KubeletFlags 0.36
264 TestNetworkPlugins/group/auto/NetCatPod 9.21
265 TestNetworkPlugins/group/kindnet/Start 49.34
266 TestNoKubernetes/serial/Stop 1.33
267 TestNetworkPlugins/group/auto/DNS 0.21
268 TestNetworkPlugins/group/auto/Localhost 0.2
269 TestNetworkPlugins/group/auto/HairPin 0.17
270 TestNoKubernetes/serial/StartNoArgs 7.26
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.44
272 TestNetworkPlugins/group/calico/Start 67.58
273 TestNetworkPlugins/group/custom-flannel/Start 49.28
274 TestNetworkPlugins/group/false/Start 49.53
275 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
276 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
277 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
278 TestNetworkPlugins/group/kindnet/DNS 0.21
279 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.53
280 TestNetworkPlugins/group/kindnet/Localhost 0.2
281 TestNetworkPlugins/group/kindnet/HairPin 0.19
282 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.36
283 TestNetworkPlugins/group/false/KubeletFlags 0.4
284 TestNetworkPlugins/group/false/NetCatPod 9.26
285 TestNetworkPlugins/group/custom-flannel/DNS 0.21
286 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
287 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
288 TestNetworkPlugins/group/calico/ControllerPod 5.02
289 TestNetworkPlugins/group/calico/KubeletFlags 0.41
290 TestNetworkPlugins/group/calico/NetCatPod 11.42
291 TestNetworkPlugins/group/false/DNS 0.22
292 TestNetworkPlugins/group/false/Localhost 0.23
293 TestNetworkPlugins/group/false/HairPin 0.21
294 TestNetworkPlugins/group/flannel/Start 55.56
295 TestNetworkPlugins/group/calico/DNS 0.2
296 TestNetworkPlugins/group/calico/Localhost 0.2
297 TestNetworkPlugins/group/calico/HairPin 0.19
298 TestNetworkPlugins/group/bridge/Start 59.78
299 TestNetworkPlugins/group/enable-default-cni/Start 50.8
300 TestNetworkPlugins/group/kubenet/Start 44.78
301 TestNetworkPlugins/group/flannel/ControllerPod 5.01
302 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
303 TestNetworkPlugins/group/flannel/NetCatPod 10.25
304 TestNetworkPlugins/group/flannel/DNS 0.18
305 TestNetworkPlugins/group/flannel/Localhost 0.17
306 TestNetworkPlugins/group/flannel/HairPin 0.16
307 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
308 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
309 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
310 TestNetworkPlugins/group/bridge/NetCatPod 11.27
311 TestNetworkPlugins/group/kubenet/KubeletFlags 0.52
312 TestNetworkPlugins/group/kubenet/NetCatPod 10.35
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
314 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
316 TestNetworkPlugins/group/bridge/DNS 0.18
317 TestNetworkPlugins/group/bridge/Localhost 0.15
318 TestNetworkPlugins/group/bridge/HairPin 0.14
319 TestNetworkPlugins/group/kubenet/DNS 0.17
320 TestNetworkPlugins/group/kubenet/Localhost 0.15
321 TestNetworkPlugins/group/kubenet/HairPin 0.16
323 TestStartStop/group/old-k8s-version/serial/FirstStart 123.89
325 TestStartStop/group/embed-certs/serial/FirstStart 47.8
327 TestStartStop/group/no-preload/serial/FirstStart 55.64
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.21
330 TestStartStop/group/embed-certs/serial/DeployApp 8.34
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.34
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
333 TestStartStop/group/no-preload/serial/DeployApp 7.33
334 TestStartStop/group/embed-certs/serial/Stop 10.76
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.67
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.96
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
338 TestStartStop/group/no-preload/serial/Stop 11.04
339 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/embed-certs/serial/SecondStart 560.86
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
342 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 572.94
343 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
344 TestStartStop/group/no-preload/serial/SecondStart 559.1
345 TestStartStop/group/old-k8s-version/serial/DeployApp 8.36
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.64
347 TestStartStop/group/old-k8s-version/serial/Stop 10.87
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
349 TestStartStop/group/old-k8s-version/serial/SecondStart 66.15
350 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
351 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
352 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
353 TestStartStop/group/old-k8s-version/serial/Pause 3.1
355 TestStartStop/group/newest-cni/serial/FirstStart 42.9
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.72
358 TestStartStop/group/newest-cni/serial/Stop 10.91
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
360 TestStartStop/group/newest-cni/serial/SecondStart 28.01
361 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
364 TestStartStop/group/newest-cni/serial/Pause 3.14
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
366 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
368 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
369 TestStartStop/group/embed-certs/serial/Pause 3.1
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
371 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
372 TestStartStop/group/no-preload/serial/Pause 3.02
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
TestDownloadOnly/v1.16.0/json-events (6.24s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-665208 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-665208 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.234896634s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.24s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-665208
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-665208: exit status 85 (91.048798ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-665208 | jenkins | v1.29.0 | 28 Jan 23 18:21 UTC |          |
	|         | -p download-only-665208        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 18:21:36
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 18:21:36.318487   10365 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:21:36.318622   10365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:21:36.318631   10365 out.go:309] Setting ErrFile to fd 2...
	I0128 18:21:36.318635   10365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:21:36.318736   10365 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	W0128 18:21:36.318853   10365 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3259/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3259/.minikube/config/config.json: no such file or directory
	I0128 18:21:36.319398   10365 out.go:303] Setting JSON to true
	I0128 18:21:36.320184   10365 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":249,"bootTime":1674929848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 18:21:36.320250   10365 start.go:135] virtualization: kvm guest
	I0128 18:21:36.323534   10365 out.go:97] [download-only-665208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 18:21:36.323664   10365 notify.go:220] Checking for updates...
	W0128 18:21:36.323667   10365 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball: no such file or directory
	I0128 18:21:36.325664   10365 out.go:169] MINIKUBE_LOCATION=15565
	I0128 18:21:36.327714   10365 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 18:21:36.329798   10365 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:21:36.331992   10365 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	I0128 18:21:36.334396   10365 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0128 18:21:36.338823   10365 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0128 18:21:36.339183   10365 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 18:21:36.368743   10365 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0128 18:21:36.368826   10365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:21:37.276456   10365 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-28 18:21:36.386543191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:21:37.276570   10365 docker.go:282] overlay module found
	I0128 18:21:37.279150   10365 out.go:97] Using the docker driver based on user configuration
	I0128 18:21:37.279172   10365 start.go:296] selected driver: docker
	I0128 18:21:37.279182   10365 start.go:857] validating driver "docker" against <nil>
	I0128 18:21:37.279271   10365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:21:37.384725   10365 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-28 18:21:37.297611034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:21:37.384843   10365 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 18:21:37.385292   10365 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0128 18:21:37.385446   10365 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 18:21:37.387917   10365 out.go:169] Using Docker driver with root privileges
	I0128 18:21:37.389715   10365 cni.go:84] Creating CNI manager for ""
	I0128 18:21:37.389764   10365 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 18:21:37.389779   10365 start_flags.go:319] config:
	{Name:download-only-665208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-665208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 18:21:37.391871   10365 out.go:97] Starting control plane node download-only-665208 in cluster download-only-665208
	I0128 18:21:37.391899   10365 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 18:21:37.393845   10365 out.go:97] Pulling base image ...
	I0128 18:21:37.393894   10365 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 18:21:37.394022   10365 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 18:21:37.414782   10365 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 18:21:37.414939   10365 image.go:61] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory
	I0128 18:21:37.415018   10365 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 18:21:37.417942   10365 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 18:21:37.417964   10365 cache.go:57] Caching tarball of preloaded images
	I0128 18:21:37.418075   10365 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 18:21:37.420868   10365 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0128 18:21:37.420885   10365 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 18:21:37.447435   10365 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 18:21:39.952355   10365 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 18:21:39.952437   10365 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 18:21:40.689920   10365 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 18:21:40.690287   10365 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/download-only-665208/config.json ...
	I0128 18:21:40.690327   10365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/download-only-665208/config.json: {Name:mk486658b2baec615eab26253db22a93502b0468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 18:21:40.690478   10365 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 18:21:40.690657   10365 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15565-3259/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-665208"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.26.1/json-events (4.75s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-665208 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-665208 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.747610717s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (4.75s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-665208
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-665208: exit status 85 (89.46427ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-665208 | jenkins | v1.29.0 | 28 Jan 23 18:21 UTC |          |
	|         | -p download-only-665208        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-665208 | jenkins | v1.29.0 | 28 Jan 23 18:21 UTC |          |
	|         | -p download-only-665208        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 18:21:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 18:21:42.641990   10534 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:21:42.642161   10534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:21:42.642169   10534 out.go:309] Setting ErrFile to fd 2...
	I0128 18:21:42.642175   10534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:21:42.642295   10534 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	W0128 18:21:42.642435   10534 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3259/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3259/.minikube/config/config.json: no such file or directory
	I0128 18:21:42.642888   10534 out.go:303] Setting JSON to true
	I0128 18:21:42.643676   10534 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":255,"bootTime":1674929848,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 18:21:42.643738   10534 start.go:135] virtualization: kvm guest
	I0128 18:21:42.646452   10534 out.go:97] [download-only-665208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 18:21:42.646573   10534 notify.go:220] Checking for updates...
	I0128 18:21:42.648545   10534 out.go:169] MINIKUBE_LOCATION=15565
	I0128 18:21:42.650718   10534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 18:21:42.652761   10534 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:21:42.654717   10534 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	I0128 18:21:42.656568   10534 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-665208"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.27s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.27s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.18s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-665208
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.18s)

TestDownloadOnlyKic (2.92s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-273243 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-273243 --force --alsologtostderr --driver=docker  --container-runtime=docker: (1.893145212s)
helpers_test.go:175: Cleaning up "download-docker-273243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-273243
--- PASS: TestDownloadOnlyKic (2.92s)

TestBinaryMirror (0.89s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-596371 --alsologtostderr --binary-mirror http://127.0.0.1:39727 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-596371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-596371
--- PASS: TestBinaryMirror (0.89s)

TestOffline (61.57s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-944588 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-944588 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (59.155320188s)
helpers_test.go:175: Cleaning up "offline-docker-944588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-944588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-944588: (2.408169886s)
--- PASS: TestOffline (61.57s)

TestAddons/Setup (101.13s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-266049 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-266049 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m41.134636298s)
--- PASS: TestAddons/Setup (101.13s)

TestAddons/parallel/Registry (14.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 10.6104ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-f7hp4" [4f541cce-64e1-4706-add2-f3a83c155805] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008619399s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gv5d9" [c6c9aff7-9769-4560-bad4-395a68c4a767] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007128093s
addons_test.go:305: (dbg) Run:  kubectl --context addons-266049 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-266049 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:310: (dbg) Done: kubectl --context addons-266049 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.850591034s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.72s)

TestAddons/parallel/Ingress (23.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-266049 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context addons-266049 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.639742409s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-266049 replace --force -f testdata/nginx-ingress-v1.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:197: (dbg) Non-zero exit: kubectl --context addons-266049 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (777.010046ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.98.55.26:443: connect: connection refused

** /stderr **

=== CONT  TestAddons/parallel/Ingress
addons_test.go:197: (dbg) Run:  kubectl --context addons-266049 replace --force -f testdata/nginx-ingress-v1.yaml
2023/01/28 18:23:47 [DEBUG] GET http://192.168.49.2:5000
=== CONT  TestAddons/parallel/Ingress
addons_test.go:210: (dbg) Run:  kubectl --context addons-266049 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6578dad9-5890-47d9-a9e3-993f2d607b35] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [6578dad9-5890-47d9-a9e3-993f2d607b35] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006052377s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-266049 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 addons disable ingress-dns --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-266049 addons disable ingress-dns --alsologtostderr -v=1: (1.319405613s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 addons disable ingress --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-266049 addons disable ingress --alsologtostderr -v=1: (7.743189425s)
--- PASS: TestAddons/parallel/Ingress (23.37s)

TestAddons/parallel/MetricsServer (5.59s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.428994ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-mjxqm" [86428214-4cab-449e-b56f-a1ebfa189e4e] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.061867654s
addons_test.go:380: (dbg) Run:  kubectl --context addons-266049 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.59s)

TestAddons/parallel/HelmTiller (10.81s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 1.888978ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-hzcjw" [e26bf111-ff75-4370-b54f-6fcebb041b9e] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009266507s
addons_test.go:438: (dbg) Run:  kubectl --context addons-266049 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:438: (dbg) Done: kubectl --context addons-266049 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.452423746s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.81s)

TestAddons/parallel/CSI (39.9s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 11.566572ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-266049 create -f testdata/csi-hostpath-driver/pvc.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-266049 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:394: (dbg) Run:  kubectl --context addons-266049 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-266049 create -f testdata/csi-hostpath-driver/pv-pod.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d2dc7e2b-0486-4a5c-9cbb-0ef4c1c36aff] Pending
helpers_test.go:344: "task-pv-pod" [d2dc7e2b-0486-4a5c-9cbb-0ef4c1c36aff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [d2dc7e2b-0486-4a5c-9cbb-0ef4c1c36aff] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.007357964s
addons_test.go:549: (dbg) Run:  kubectl --context addons-266049 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-266049 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:419: (dbg) Run:  kubectl --context addons-266049 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-266049 delete pod task-pv-pod
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-266049 delete pod task-pv-pod: (1.848447446s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-266049 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-266049 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-266049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-266049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-266049 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f6ea2687-0b64-4db6-b3ca-1379972ab72e] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod-restore" [f6ea2687-0b64-4db6-b3ca-1379972ab72e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod-restore" [f6ea2687-0b64-4db6-b3ca-1379972ab72e] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.006105279s
addons_test.go:591: (dbg) Run:  kubectl --context addons-266049 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-266049 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-266049 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 addons disable csi-hostpath-driver --alsologtostderr -v=1
=== CONT  TestAddons/parallel/CSI
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-266049 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.117153045s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-266049 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.90s)

TestAddons/parallel/Headlamp (10.02s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-266049 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-266049 --alsologtostderr -v=1: (1.011764732s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-crcb9" [28c4e05d-8086-4ec2-be76-3d78aa9d137e] Pending
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:344: "headlamp-5759877c79-crcb9" [28c4e05d-8086-4ec2-be76-3d78aa9d137e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:344: "headlamp-5759877c79-crcb9" [28c4e05d-8086-4ec2-be76-3d78aa9d137e] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.005826437s
--- PASS: TestAddons/parallel/Headlamp (10.02s)

TestAddons/parallel/CloudSpanner (5.52s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:344: "cloud-spanner-emulator-769b7f8b64-dbk7f" [54c32722-4487-4196-bd4d-c0d9f9e5e49b] Running
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007058739s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-266049
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-266049 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-266049 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (11.12s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-266049
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-266049: (10.911088874s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-266049
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-266049
--- PASS: TestAddons/StoppedEnableDisable (11.12s)

TestCertOptions (34.46s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-856446 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-856446 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (29.712438757s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-856446 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-856446 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-856446 -- "sudo cat /etc/kubernetes/admin.conf"
E0128 18:50:11.800944   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-856446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-856446
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-856446: (3.817823904s)
--- PASS: TestCertOptions (34.46s)

TestCertExpiration (247.29s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-047374 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-047374 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (32.376411149s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-047374 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-047374 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (30.086019122s)
helpers_test.go:175: Cleaning up "cert-expiration-047374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-047374
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-047374: (4.822644801s)
--- PASS: TestCertExpiration (247.29s)

TestDockerFlags (36.35s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-079152 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-079152 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.301608972s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-079152 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-079152 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-079152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-079152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-079152: (2.312883398s)
--- PASS: TestDockerFlags (36.35s)

TestForceSystemdFlag (45.43s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-002144 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-002144 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.328725689s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-002144 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-002144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-002144
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-002144: (2.581996685s)
--- PASS: TestForceSystemdFlag (45.43s)

TestForceSystemdEnv (38.65s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-210328 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0128 18:49:11.118826   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-210328 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.642739437s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-210328 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-210328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-210328
=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-210328: (2.500502364s)
--- PASS: TestForceSystemdEnv (38.65s)

TestKVMDriverInstallOrUpdate (2.04s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.04s)

TestErrorSpam/setup (26.09s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-797746 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-797746 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-797746 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-797746 --driver=docker  --container-runtime=docker: (26.09071584s)
--- PASS: TestErrorSpam/setup (26.09s)

TestErrorSpam/start (0.98s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 start --dry-run
--- PASS: TestErrorSpam/start (0.98s)

TestErrorSpam/status (1.13s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.44s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (11.03s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 stop: (10.774828049s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-797746 --log_dir /tmp/nospam-797746 stop
--- PASS: TestErrorSpam/stop (11.03s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/test/nested/copy/10353/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.14s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017977 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-017977 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (43.138631635s)
--- PASS: TestFunctional/serial/StartWithProxy (43.14s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.83s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017977 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-017977 --alsologtostderr -v=8: (44.828341352s)
functional_test.go:656: soft start took 44.829195825s for "functional-017977" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.83s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-017977 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.62s)

TestFunctional/serial/CacheCmd/cache/add_local (0.75s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-017977 /tmp/TestFunctionalserialCacheCmdcacheadd_local864679065/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cache add minikube-local-cache-test:functional-017977
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cache delete minikube-local-cache-test:functional-017977
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-017977
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.75s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.8s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (351.867855ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cache reload
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.80s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 kubectl -- --context functional-017977 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-017977 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (43.58s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017977 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-017977 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.577043599s)
functional_test.go:754: restart took 43.577177033s for "functional-017977" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.58s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-017977 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.19s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 logs: (1.194430523s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

TestFunctional/serial/LogsFileCmd (1.2s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 logs --file /tmp/TestFunctionalserialLogsFileCmd1429410003/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 logs --file /tmp/TestFunctionalserialLogsFileCmd1429410003/001/logs.txt: (1.202528268s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/parallel/ConfigCmd (0.52s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 config get cpus: exit status 14 (74.766973ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 config set cpus 2
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 config get cpus: exit status 14 (87.686313ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (13.76s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-017977 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-017977 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 61703: os: process already finished
E0128 18:28:33.308765   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (13.76s)

TestFunctional/parallel/DryRun (0.66s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-017977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (279.037039ms)
-- stdout --
	* [functional-017977] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0128 18:28:19.494517   60481 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:28:19.494773   60481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:28:19.494786   60481 out.go:309] Setting ErrFile to fd 2...
	I0128 18:28:19.494795   60481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:28:19.494957   60481 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	I0128 18:28:19.495636   60481 out.go:303] Setting JSON to false
	I0128 18:28:19.497156   60481 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":652,"bootTime":1674929848,"procs":625,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 18:28:19.497245   60481 start.go:135] virtualization: kvm guest
	I0128 18:28:19.503485   60481 out.go:177] * [functional-017977] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 18:28:19.505573   60481 notify.go:220] Checking for updates...
	I0128 18:28:19.507540   60481 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 18:28:19.509673   60481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 18:28:19.511464   60481 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:28:19.513140   60481 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	I0128 18:28:19.514609   60481 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 18:28:19.516220   60481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 18:28:19.517977   60481 config.go:180] Loaded profile config "functional-017977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:28:19.518354   60481 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 18:28:19.553221   60481 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0128 18:28:19.553317   60481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:28:19.681280   60481 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-28 18:28:19.583121434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:28:19.681414   60481 docker.go:282] overlay module found
	I0128 18:28:19.684313   60481 out.go:177] * Using the docker driver based on existing profile
	I0128 18:28:19.687206   60481 start.go:296] selected driver: docker
	I0128 18:28:19.687230   60481 start.go:857] validating driver "docker" against &{Name:functional-017977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-017977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 18:28:19.687317   60481 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 18:28:19.690157   60481 out.go:177] 
	W0128 18:28:19.691810   60481 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0128 18:28:19.693307   60481 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017977 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.66s)

TestFunctional/parallel/InternationalLanguage (0.3s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-017977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-017977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (302.503206ms)

-- stdout --
	* [functional-017977] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0128 18:28:19.387434   60418 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:28:19.387615   60418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:28:19.387624   60418 out.go:309] Setting ErrFile to fd 2...
	I0128 18:28:19.387628   60418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:28:19.387816   60418 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	I0128 18:28:19.388413   60418 out.go:303] Setting JSON to false
	I0128 18:28:19.406171   60418 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":652,"bootTime":1674929848,"procs":621,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 18:28:19.406271   60418 start.go:135] virtualization: kvm guest
	I0128 18:28:19.410138   60418 out.go:177] * [functional-017977] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0128 18:28:19.412835   60418 notify.go:220] Checking for updates...
	I0128 18:28:19.414856   60418 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 18:28:19.420522   60418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 18:28:19.422893   60418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	I0128 18:28:19.426863   60418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	I0128 18:28:19.428576   60418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 18:28:19.430177   60418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 18:28:19.432198   60418 config.go:180] Loaded profile config "functional-017977": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:28:19.432758   60418 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 18:28:19.463970   60418 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0128 18:28:19.464067   60418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:28:19.582778   60418 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-28 18:28:19.48651918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:28:19.582921   60418 docker.go:282] overlay module found
	I0128 18:28:19.585759   60418 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0128 18:28:19.587694   60418 start.go:296] selected driver: docker
	I0128 18:28:19.587725   60418 start.go:857] validating driver "docker" against &{Name:functional-017977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-017977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 18:28:19.587867   60418 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 18:28:19.590829   60418 out.go:177] 
	W0128 18:28:19.592962   60418 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0128 18:28:19.595230   60418 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)
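Note: the French lines above are the localized counterparts of the English DryRun output. "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile", and the X line is the same RSRC_INSUFFICIENT_REQ_MEMORY error ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A minimal Go sketch of reproducing the localized failure outside the harness, assuming minikube picks its translation from the LC_ALL/LANG environment as the output above suggests (only the binary path, profile, and flags come from the log; everything else is illustrative):

// i18n_sketch.go: force a French locale when invoking minikube.
// A minimal sketch; assumes minikube reads LC_ALL/LANG to choose
// a translation, as the French output above suggests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-017977",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput() // expect exit status 23, as above
	fmt.Printf("err=%v\n%s", err, out)
}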

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
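The -f template and the -o json form above expose the same status fields. A minimal Go sketch of consuming the JSON form, assuming a single-node profile emits one JSON object whose keys match the template fields (the struct is hypothetical, not minikube's own type):

// status_sketch.go: decode `minikube status -o json` for one profile.
// A minimal sketch; field names are taken from the -f template above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileStatus struct {
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
}

func main() {
	// status exits non-zero when a component is down; stdout still
	// carries the JSON, so keep it even on *exec.ExitError.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-017977", "status", "-o", "json").Output()
	if _, ok := err.(*exec.ExitError); err != nil && !ok {
		panic(err)
	}
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}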

TestFunctional/parallel/ServiceCmd (11.15s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-017977 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-017977 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-c75cz" [a3df846a-fe7f-4036-b1ad-a3341cff18f8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-c75cz" [a3df846a-fe7f-4036-b1ad-a3341cff18f8] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.006628008s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 service list: (1.007423658s)
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.49.2:31566
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:31566
--- PASS: TestFunctional/parallel/ServiceCmd (11.15s)

TestFunctional/parallel/ServiceCmdConnect (10.84s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-017977 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1565: (dbg) Run:  kubectl --context functional-017977 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-brcw4" [2d02d460-83d3-4a59-8b67-1b45a21c5ab8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:344: "hello-node-connect-5cf7cc858f-brcw4" [2d02d460-83d3-4a59-8b67-1b45a21c5ab8] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006781137s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:30911
functional_test.go:1605: http://192.168.49.2:30911: success! body:

Hostname: hello-node-connect-5cf7cc858f-brcw4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30911
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.84s)
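The 30911 in the endpoint is an ephemeral NodePort, so it changes from run to run; the robust pattern is to resolve the URL first and then probe it, which is what the test does. A minimal Go sketch of the same round trip (the profile and service names come from the log; the rest is illustrative):

// probe_sketch.go: resolve a service URL via `minikube service --url`
// and GET it, mirroring the ServiceCmdConnect check above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-017977", "service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %d\n%s", url, resp.StatusCode, body)
}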

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (28.63s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "storage-provisioner" [259a1e72-55fc-44cc-a339-0512aa6196f0] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007564365s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-017977 get storageclass -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-017977 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-017977 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-017977 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-017977 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4fe13aee-587b-48f1-9a7a-ba5859f454cf] Pending
helpers_test.go:344: "sp-pod" [4fe13aee-587b-48f1-9a7a-ba5859f454cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [4fe13aee-587b-48f1-9a7a-ba5859f454cf] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.054744741s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-017977 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-017977 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-017977 delete -f testdata/storage-provisioner/pod.yaml: (1.198721083s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-017977 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [db8ab5a5-1a30-45fd-9f56-2ef8ccbea9ce] Pending
helpers_test.go:344: "sp-pod" [db8ab5a5-1a30-45fd-9f56-2ef8ccbea9ce] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008427192s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-017977 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.63s)
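The sequence above is a persistence check: write /tmp/mount/foo in the first sp-pod, delete the pod, recreate it from the same manifest, and list /tmp/mount to confirm the claim's data survived the pod. The "waiting ... for pods matching" lines are a readiness poll; a minimal Go sketch of such a poll, using only kubectl and a hypothetical helper (the label, context, and timeout values come from the log):

// wait_sketch.go: poll pod phase with kubectl until Running or timeout.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(kubeContext, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not Running within %v", label, timeout)
}

func main() {
	if err := waitRunning("functional-017977", "test=storage-provisioner", 3*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("sp-pod is Running")
}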

TestFunctional/parallel/SSHCmd (0.94s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.94s)

TestFunctional/parallel/CpCmd (1.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh -n functional-017977 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 cp functional-017977:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2998328872/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh -n functional-017977 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

TestFunctional/parallel/MySQL (27.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-017977 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-qhhrm" [e3c2bf4b-bd2e-46b8-b813-a83f25ad7c29] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-qhhrm" [e3c2bf4b-bd2e-46b8-b813-a83f25ad7c29] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.013249123s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;": exit status 1 (155.741229ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;": exit status 1 (183.171445ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;": exit status 1 (175.314188ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;": exit status 1 (268.7519ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-017977 exec mysql-888f84dd9-qhhrm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.33s)
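The four failed attempts above are ordinary MySQL startup noise rather than test flakiness: ERROR 2002 means the server socket is not up yet, and ERROR 1045 typically appears while the mysql:5.7 entrypoint is still initializing (its temporary bootstrap server does not accept the final credentials). The test simply retries until the query succeeds. A minimal Go retry sketch (the command line comes from the log; the attempt count and backoff are assumptions):

// mysql_retry_sketch.go: retry a query through kubectl exec until the
// server finishes starting up.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	var lastErr error
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-017977",
			"exec", "mysql-888f84dd9-qhhrm", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", attempt, err, out)
		time.Sleep(time.Duration(attempt) * time.Second) // linear backoff
	}
	panic(lastErr)
}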

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10353/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo cat /etc/test/nested/copy/10353/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.6s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10353.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo cat /etc/ssl/certs/10353.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10353.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo cat /usr/share/ca-certificates/10353.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/103532.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo cat /etc/ssl/certs/103532.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/103532.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo cat /usr/share/ca-certificates/103532.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.60s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-017977 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 ssh "sudo systemctl is-active crio": exit status 1 (386.29439ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
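The PASS here hinges on exit-code semantics: with Docker as the active runtime, `systemctl is-active crio` must print "inactive" and exit non-zero (the remote status 3 is echoed in the stderr above), and `minikube ssh` propagates the remote failure as its own non-zero exit. A minimal Go sketch of the same check (hypothetical, not the test's own code):

// is_active_sketch.go: probe whether cri-o is active inside the node.
// `systemctl is-active` exits 0 only when the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-017977", "ssh", "sudo systemctl is-active crio").CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("crio not active: %q (exit %d)\n", out, ee.ExitCode())
	} else if err != nil {
		panic(err)
	} else {
		fmt.Printf("crio is active: %q\n", out)
	}
}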

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.92s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.92s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017977 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-017977
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-017977
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017977 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/mysql                     | 5.7               | 9ec14ca3fec4d | 455MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-017977 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-017977 | e4ded956d6ea0 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017977 image ls --format json:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags"
:["gcr.io/google-containers/addon-resizer:functional-017977"],"size":"32900000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"e4ded956d6ea06
7b753160db2ca9f47136c80e2c4fba948290d61374008e5ab6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-017977"],"size":"30"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
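The JSON form is the easiest of the list formats to consume programmatically: each entry carries id, repoDigests, repoTags, and size, with size a decimal byte count encoded as a string. A minimal Go decoding sketch (the struct is hypothetical; its JSON keys are the ones visible in the output above):

// image_list_sketch.go: decode `minikube image ls --format json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // decimal bytes, as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-017977", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags, img.Size) // IDs above are 64 hex chars
	}
}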

TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017977 image ls --format yaml:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-017977
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: e4ded956d6ea067b753160db2ca9f47136c80e2c4fba948290d61374008e5ab6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-017977
size: "30"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 ssh pgrep buildkitd: exit status 1 (410.471031ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image build -t localhost/my-image:functional-017977 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 image build -t localhost/my-image:functional-017977 testdata/build: (2.21833623s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-017977 image build -t localhost/my-image:functional-017977 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in a3fb0d033533
Removing intermediate container a3fb0d033533
---> 8f9c76bca49e
Step 3/3 : ADD content.txt /
---> ecbeb0407adb
Successfully built ecbeb0407adb
Successfully tagged localhost/my-image:functional-017977
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.90s)
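The transcript pins down what testdata/build contains: a build context with a content.txt plus a three-step Dockerfile. As recovered from Steps 1/3 through 3/3 above (exact whitespace in the source file is a guess):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /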

TestFunctional/parallel/ImageCommands/Setup (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-017977
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/DockerEnv/bash (1.5s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-017977 docker-env) && out/minikube-linux-amd64 status -p functional-017977"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-017977 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.50s)
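Note: the DockerEnv/bash subtest checks the docker-env shell integration; the same check by hand, assuming a bash shell (commands taken from the log; --unset restores the environment afterwards):

	eval "$(out/minikube-linux-amd64 -p functional-017977 docker-env)"
	docker images    # now lists images from the Docker daemon inside the functional-017977 node
	eval "$(out/minikube-linux-amd64 -p functional-017977 docker-env --unset)"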

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image load --daemon gcr.io/google-containers/addon-resizer:functional-017977
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 image load --daemon gcr.io/google-containers/addon-resizer:functional-017977: (4.896080079s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.19s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-017977 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.34s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-017977 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [aca01808-f8ee-497d-8208-919d5233dbad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [aca01808-f8ee-497d-8208-919d5233dbad] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.007994929s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image load --daemon gcr.io/google-containers/addon-resizer:functional-017977
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 image load --daemon gcr.io/google-containers/addon-resizer:functional-017977: (2.812717233s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.18s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-017977
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image load --daemon gcr.io/google-containers/addon-resizer:functional-017977
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 image load --daemon gcr.io/google-containers/addon-resizer:functional-017977: (4.684159863s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.83s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image save gcr.io/google-containers/addon-resizer:functional-017977 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 image save gcr.io/google-containers/addon-resizer:functional-017977 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.909380491s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image rm gcr.io/google-containers/addon-resizer:functional-017977
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-017977
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 image save --daemon gcr.io/google-containers/addon-resizer:functional-017977
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-017977 image save --daemon gcr.io/google-containers/addon-resizer:functional-017977: (2.600263021s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-017977
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)
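Note: the preceding image subtests form a save/remove/load round-trip; condensed into one shell session (the tar path is from the log, any writable path works):

	out/minikube-linux-amd64 -p functional-017977 image save gcr.io/google-containers/addon-resizer:functional-017977 /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-017977 image rm gcr.io/google-containers/addon-resizer:functional-017977
	out/minikube-linux-amd64 -p functional-017977 image load /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-017977 image save --daemon gcr.io/google-containers/addon-resizer:functional-017977
	docker image inspect gcr.io/google-containers/addon-resizer:functional-017977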

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-017977 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.111.22.8 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-017977 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
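Note: the TunnelCmd serial group above corresponds to this shell session (a sketch; testdata/testsvc.yaml is not shown in the log, only that it creates a run=nginx-svc pod behind an nginx-svc LoadBalancer service):

	out/minikube-linux-amd64 -p functional-017977 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	kubectl --context functional-017977 apply -f testdata/testsvc.yaml
	kubectl --context functional-017977 wait pod --selector=run=nginx-svc --for=condition=Ready --timeout=4m
	IP="$(kubectl --context functional-017977 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
	curl -fsS "http://${IP}/" > /dev/null && echo "tunnel at http://${IP} is working!"
	kill "$TUNNEL_PID"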

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "358.505018ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "71.978891ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "450.594597ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "74.690968ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/MountCmd/any-port (8.93s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-017977 /tmp/TestFunctionalparallelMountCmdany-port3439329774/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1674930496571985238" to /tmp/TestFunctionalparallelMountCmdany-port3439329774/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674930496571985238" to /tmp/TestFunctionalparallelMountCmdany-port3439329774/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674930496571985238" to /tmp/TestFunctionalparallelMountCmdany-port3439329774/001/test-1674930496571985238
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.384657ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 28 18:28 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 28 18:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 28 18:28 test-1674930496571985238
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh cat /mount-9p/test-1674930496571985238
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-017977 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [53f05fb8-d227-4c3f-8dff-159b998907d8] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [53f05fb8-d227-4c3f-8dff-159b998907d8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [53f05fb8-d227-4c3f-8dff-159b998907d8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [53f05fb8-d227-4c3f-8dff-159b998907d8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.011079793s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-017977 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017977 /tmp/TestFunctionalparallelMountCmdany-port3439329774/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.93s)
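Note: the any-port mount flow by hand (a sketch; /tmp/mount-src stands in for the per-run temp directory the test creates, and the first findmnt may need a retry while the 9p mount comes up, as it did above):

	out/minikube-linux-amd64 mount -p functional-017977 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
	MOUNT_PID=$!
	out/minikube-linux-amd64 -p functional-017977 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-017977 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-017977 ssh "sudo umount -f /mount-9p"
	kill "$MOUNT_PID"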

TestFunctional/parallel/MountCmd/specific-port (2.28s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-017977 /tmp/TestFunctionalparallelMountCmdspecific-port222588749/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (467.555306ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017977 /tmp/TestFunctionalparallelMountCmdspecific-port222588749/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-017977 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-017977 ssh "sudo umount -f /mount-9p": exit status 1 (409.77365ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-017977 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-017977 /tmp/TestFunctionalparallelMountCmdspecific-port222588749/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E0128 18:28:32.990526   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 18:28:32.996244   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 18:28:33.006565   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 18:28:33.026868   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 18:28:33.067219   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
2023/01/28 18:28:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
E0128 18:28:33.147677   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.28s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-017977
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-017977
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-017977
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/NormalBuild (0.92s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-723771
--- PASS: TestImageBuild/serial/NormalBuild (0.92s)

TestImageBuild/serial/BuildWithBuildArg (1.05s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-723771
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-723771: (1.046175039s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.05s)

TestImageBuild/serial/BuildWithDockerIgnore (0.43s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-723771
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.34s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-723771
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.34s)

TestIngressAddonLegacy/StartLegacyK8sCluster (53.72s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-067754 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0128 18:29:13.954587   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 18:29:54.915682   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-067754 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (53.718849341s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (53.72s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.22s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067754 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-067754 addons enable ingress --alsologtostderr -v=5: (10.217780644s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.22s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.4s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067754 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (40.26s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-067754 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-067754 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.913323163s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-067754 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-067754 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [13fa90a8-be35-49a2-94e1-40fdec2bbe7f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [13fa90a8-be35-49a2-94e1-40fdec2bbe7f] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.005711096s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067754 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-067754 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067754 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067754 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-067754 addons disable ingress-dns --alsologtostderr -v=1: (8.770854124s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-067754 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-067754 addons disable ingress --alsologtostderr -v=1: (7.26731586s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (40.26s)
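Note: condensed, the ingress validation above runs these commands (manifest file names are from the log; their contents are not shown):

	kubectl --context ingress-addon-legacy-067754 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	kubectl --context ingress-addon-legacy-067754 replace --force -f testdata/nginx-ingress-v1beta1.yaml
	kubectl --context ingress-addon-legacy-067754 replace --force -f testdata/nginx-pod-svc.yaml
	out/minikube-linux-amd64 -p ingress-addon-legacy-067754 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2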

TestJSONOutput/start/Command (50.85s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-896502 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0128 18:31:16.836079   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-896502 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (50.849217639s)
--- PASS: TestJSONOutput/start/Command (50.85s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-896502 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-896502 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-896502 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-896502 --output=json --user=testUser: (5.810085828s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.3s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-875722 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-875722 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.845303ms)

-- stdout --
	{"specversion":"1.0","id":"c8dc7a28-1557-40e2-9da3-a2c650386510","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-875722] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d44f5dfa-ef7b-4f3d-b020-02d300116d03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"fb92be1d-a285-4df7-8b42-7f7da4623659","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8c6b8334-6111-4e4c-9fcb-c86190f04ba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig"}}
	{"specversion":"1.0","id":"e6442bde-d493-438d-9411-bc612e7054db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube"}}
	{"specversion":"1.0","id":"14e66230-acf0-4875-a9a9-3951bf9b50bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d03d6735-716a-4575-b786-7f3ac7f8fee4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e74a77c-cdf6-4333-b4e9-4d3d355c4274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-875722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-875722
--- PASS: TestErrorJSONOutput (0.30s)
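Note: each stdout line above is a CloudEvents envelope; the human-readable messages can be pulled out with jq (the jq filter is an illustration, not part of the test):

	out/minikube-linux-amd64 start -p json-output-error-875722 --memory=2200 --output=json --wait=true --driver=fail | jq -r '.data.message // empty'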

TestKicCustomNetwork/create_custom_network (28.79s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-069183 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-069183 --network=: (26.693280134s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-069183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-069183
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-069183: (2.075604801s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.79s)

TestKicCustomNetwork/use_default_bridge_network (27.92s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-108908 --network=bridge
E0128 18:32:48.070771   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:48.076048   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:48.086346   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:48.106610   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:48.146945   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:48.227269   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:48.387676   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:48.707807   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-108908 --network=bridge: (25.918657799s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-108908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-108908
E0128 18:32:49.348191   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:50.628560   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-108908: (1.97744797s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.92s)

TestKicExistingNetwork (29s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-648483 --network=existing-network
E0128 18:32:53.189316   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:32:58.310322   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:33:08.551411   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-648483 --network=existing-network: (26.746415466s)
helpers_test.go:175: Cleaning up "existing-network-648483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-648483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-648483: (2.082440298s)
--- PASS: TestKicExistingNetwork (29.00s)

TestKicCustomSubnet (27.46s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-483729 --subnet=192.168.60.0/24
E0128 18:33:29.032430   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:33:32.990984   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-483729 --subnet=192.168.60.0/24: (25.295083078s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-483729 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-483729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-483729
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-483729: (2.142652405s)
--- PASS: TestKicCustomSubnet (27.46s)

TestKicStaticIP (28s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-660960 --static-ip=192.168.200.200
E0128 18:34:00.677388   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 18:34:09.993689   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-660960 --static-ip=192.168.200.200: (25.690889861s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-660960 ip
helpers_test.go:175: Cleaning up "static-ip-660960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-660960
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-660960: (2.114857537s)
--- PASS: TestKicStaticIP (28.00s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (56.99s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-227972 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-227972 --driver=docker  --container-runtime=docker: (25.359203094s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-231062 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-231062 --driver=docker  --container-runtime=docker: (26.393036238s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-227972
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-231062
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-231062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-231062
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-231062: (1.818722889s)
helpers_test.go:175: Cleaning up "first-227972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-227972
E0128 18:35:11.801620   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:11.806891   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:11.817207   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:11.837565   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:11.877942   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:11.958276   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:12.118721   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:12.439342   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-227972: (2.145623566s)
--- PASS: TestMinikubeProfile (56.99s)
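TestMinikubeProfile drives `minikube profile list -ojson` twice; here is a minimal sketch of consuming that output, deliberately decoding into a generic map rather than assuming a schema (the key names inside the JSON are not asserted):

	// profilelist_sketch.go: hypothetical consumer of `profile list -ojson`.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatalf("profile list failed: %v", err)
		}
		var profiles map[string]interface{}
		if err := json.Unmarshal(out, &profiles); err != nil {
			log.Fatalf("decode failed: %v", err)
		}
		for key := range profiles { // groupings of profiles; key names not assumed
			fmt.Println("top-level key:", key)
		}
	}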
TestMountStart/serial/StartWithMountFirst (6.34s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-343233 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0128 18:35:13.079865   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:14.360599   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:35:16.921651   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-343233 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.341510009s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.34s)
TestMountStart/serial/VerifyMountFirst (0.33s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-343233 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)
TestMountStart/serial/StartWithMountSecond (6.17s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-356365 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0128 18:35:22.041891   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-356365 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.171521211s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.17s)
TestMountStart/serial/VerifyMountSecond (0.33s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-356365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)
TestMountStart/serial/DeleteFirst (1.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-343233 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-343233 --alsologtostderr -v=5: (1.582697489s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)
TestMountStart/serial/VerifyMountPostDelete (0.33s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-356365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)
TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-356365
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-356365: (1.247590308s)
--- PASS: TestMountStart/serial/Stop (1.25s)
TestMountStart/serial/RestartStopped (7.41s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-356365
E0128 18:35:31.914795   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:35:32.282263   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-356365: (6.41327364s)
--- PASS: TestMountStart/serial/RestartStopped (7.41s)
TestMountStart/serial/VerifyMountPostStop (0.33s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-356365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)
TestMultiNode/serial/FreshStart2Nodes (67.81s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052675 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0128 18:35:52.763272   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:36:33.724116   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052675 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m7.240785619s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.81s)
TestMultiNode/serial/DeployApp2Nodes (6.88s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-052675 -- rollout status deployment/busybox: (5.06248572s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g4wvp -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g84sq -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g4wvp -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g84sq -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g4wvp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g84sq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.88s)
TestMultiNode/serial/PingHostFrom2Pods (0.97s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g4wvp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g4wvp -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g84sq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-052675 -- exec busybox-6b86dd6d48-g84sq -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
TestMultiNode/serial/AddNode (19.49s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-052675 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-052675 -v 3 --alsologtostderr: (18.720914432s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.49s)
TestMultiNode/serial/ProfileList (0.38s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)
TestMultiNode/serial/CopyFile (12.09s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp testdata/cp-test.txt multinode-052675:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1635582165/001/cp-test_multinode-052675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675:/home/docker/cp-test.txt multinode-052675-m02:/home/docker/cp-test_multinode-052675_multinode-052675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m02 "sudo cat /home/docker/cp-test_multinode-052675_multinode-052675-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675:/home/docker/cp-test.txt multinode-052675-m03:/home/docker/cp-test_multinode-052675_multinode-052675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m03 "sudo cat /home/docker/cp-test_multinode-052675_multinode-052675-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp testdata/cp-test.txt multinode-052675-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1635582165/001/cp-test_multinode-052675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt multinode-052675:/home/docker/cp-test_multinode-052675-m02_multinode-052675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675 "sudo cat /home/docker/cp-test_multinode-052675-m02_multinode-052675.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt multinode-052675-m03:/home/docker/cp-test_multinode-052675-m02_multinode-052675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m03 "sudo cat /home/docker/cp-test_multinode-052675-m02_multinode-052675-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp testdata/cp-test.txt multinode-052675-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1635582165/001/cp-test_multinode-052675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt multinode-052675:/home/docker/cp-test_multinode-052675-m03_multinode-052675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675 "sudo cat /home/docker/cp-test_multinode-052675-m03_multinode-052675.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt multinode-052675-m02:/home/docker/cp-test_multinode-052675-m03_multinode-052675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 ssh -n multinode-052675-m02 "sudo cat /home/docker/cp-test_multinode-052675-m03_multinode-052675-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.09s)
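The copy matrix above repeats one pattern: `minikube cp` a file into a node, then `minikube ssh -n <node> "sudo cat ..."` to read it back. A condensed sketch of one leg of that round trip, using the profile name from this run; the comparison logic is illustrative, not the harness's own helper:

	// cptest_sketch.go: one host-to-node copy/verify leg from TestMultiNode/serial/CopyFile.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "multinode-052675"
		local, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		// Copy into the control-plane node, then read the file back over ssh.
		if err := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").Run(); err != nil {
			log.Fatalf("cp failed: %v", err)
		}
		remote, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatalf("ssh failed: %v", err)
		}
		if !bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
			log.Fatal("contents differ after round trip")
		}
	}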
TestMultiNode/serial/StopNode (2.46s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-052675 node stop m03: (1.282835149s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052675 status: exit status 7 (587.654762ms)
-- stdout --
	multinode-052675
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-052675-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-052675-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr: exit status 7 (587.579432ms)
-- stdout --
	multinode-052675
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-052675-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-052675-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0128 18:37:28.144211  140183 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:37:28.144636  140183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:37:28.144656  140183 out.go:309] Setting ErrFile to fd 2...
	I0128 18:37:28.144664  140183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:37:28.144942  140183 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	I0128 18:37:28.145230  140183 out.go:303] Setting JSON to false
	I0128 18:37:28.145259  140183 mustload.go:65] Loading cluster: multinode-052675
	I0128 18:37:28.145787  140183 notify.go:220] Checking for updates...
	I0128 18:37:28.146423  140183 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:37:28.146446  140183 status.go:255] checking status of multinode-052675 ...
	I0128 18:37:28.146871  140183 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:37:28.172151  140183 status.go:330] multinode-052675 host status = "Running" (err=<nil>)
	I0128 18:37:28.172179  140183 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:37:28.172416  140183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675
	I0128 18:37:28.197180  140183 host.go:66] Checking if "multinode-052675" exists ...
	I0128 18:37:28.197448  140183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 18:37:28.197490  140183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
	I0128 18:37:28.222485  140183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
	I0128 18:37:28.317351  140183 ssh_runner.go:195] Run: systemctl --version
	I0128 18:37:28.321116  140183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 18:37:28.330036  140183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 18:37:28.430909  140183 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-28 18:37:28.350713433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 18:37:28.431719  140183 kubeconfig.go:92] found "multinode-052675" server: "https://192.168.58.2:8443"
	I0128 18:37:28.431744  140183 api_server.go:165] Checking apiserver status ...
	I0128 18:37:28.431776  140183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 18:37:28.441376  140183 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2103/cgroup
	I0128 18:37:28.450174  140183 api_server.go:181] apiserver freezer: "11:freezer:/docker/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/kubepods/burstable/pod67b267479ac4834e2613b5155d6d00dd/a377326949167634cdac3ebfbae2e9fbd7106337b343d10f5bd76d1db5bf547d"
	I0128 18:37:28.450232  140183 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/kubepods/burstable/pod67b267479ac4834e2613b5155d6d00dd/a377326949167634cdac3ebfbae2e9fbd7106337b343d10f5bd76d1db5bf547d/freezer.state
	I0128 18:37:28.456992  140183 api_server.go:203] freezer state: "THAWED"
	I0128 18:37:28.457018  140183 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0128 18:37:28.460465  140183 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0128 18:37:28.460487  140183 status.go:421] multinode-052675 apiserver status = Running (err=<nil>)
	I0128 18:37:28.460496  140183 status.go:257] multinode-052675 status: &{Name:multinode-052675 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 18:37:28.460514  140183 status.go:255] checking status of multinode-052675-m02 ...
	I0128 18:37:28.460740  140183 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
	I0128 18:37:28.484136  140183 status.go:330] multinode-052675-m02 host status = "Running" (err=<nil>)
	I0128 18:37:28.484160  140183 host.go:66] Checking if "multinode-052675-m02" exists ...
	I0128 18:37:28.484386  140183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m02
	I0128 18:37:28.508453  140183 host.go:66] Checking if "multinode-052675-m02" exists ...
	I0128 18:37:28.508690  140183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 18:37:28.508733  140183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
	I0128 18:37:28.534103  140183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
	I0128 18:37:28.625476  140183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 18:37:28.635602  140183 status.go:257] multinode-052675-m02 status: &{Name:multinode-052675-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0128 18:37:28.635643  140183 status.go:255] checking status of multinode-052675-m03 ...
	I0128 18:37:28.635882  140183 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
	I0128 18:37:28.661124  140183 status.go:330] multinode-052675-m03 host status = "Stopped" (err=<nil>)
	I0128 18:37:28.661166  140183 status.go:343] host is not running, skipping remaining checks
	I0128 18:37:28.661176  140183 status.go:257] multinode-052675-m03 status: &{Name:multinode-052675-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
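Both status invocations above exit with status 7 while still printing the per-node breakdown, so the non-zero exit is how callers detect a partially stopped cluster. A sketch of reading that exit code from Go; treating 7 as "some node stopped" is inferred from this run, not from documented behavior:

	// statuscode_sketch.go: hypothetical exit-code check around `minikube status`.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "multinode-052675", "status")
		out, err := cmd.Output() // stdout is still returned alongside an ExitError
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 7 was observed above after `node stop m03`.
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}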
TestMultiNode/serial/RestartKeepsNodes (90.8s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-052675
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-052675
E0128 18:40:11.803023   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-052675: (22.659942653s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052675 --wait=true -v=8 --alsologtostderr
E0128 18:40:39.486947   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052675 --wait=true -v=8 --alsologtostderr: (1m8.001659689s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-052675
--- PASS: TestMultiNode/serial/RestartKeepsNodes (90.80s)
TestMultiNode/serial/DeleteNode (5.05s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-052675 node delete m03: (4.348326347s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.05s)
TestMultiNode/serial/StopMultiNode (21.79s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-052675 stop: (21.552821009s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052675 status: exit status 7 (117.993714ms)
-- stdout --
	multinode-052675
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-052675-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr: exit status 7 (119.79455ms)
-- stdout --
	multinode-052675
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-052675-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0128 18:41:55.351131  161762 out.go:296] Setting OutFile to fd 1 ...
	I0128 18:41:55.351334  161762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:41:55.351343  161762 out.go:309] Setting ErrFile to fd 2...
	I0128 18:41:55.351348  161762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 18:41:55.351483  161762 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
	I0128 18:41:55.351662  161762 out.go:303] Setting JSON to false
	I0128 18:41:55.351692  161762 mustload.go:65] Loading cluster: multinode-052675
	I0128 18:41:55.351799  161762 notify.go:220] Checking for updates...
	I0128 18:41:55.352024  161762 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 18:41:55.352044  161762 status.go:255] checking status of multinode-052675 ...
	I0128 18:41:55.352424  161762 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
	I0128 18:41:55.379341  161762 status.go:330] multinode-052675 host status = "Stopped" (err=<nil>)
	I0128 18:41:55.379363  161762 status.go:343] host is not running, skipping remaining checks
	I0128 18:41:55.379369  161762 status.go:257] multinode-052675 status: &{Name:multinode-052675 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 18:41:55.379397  161762 status.go:255] checking status of multinode-052675-m02 ...
	I0128 18:41:55.379628  161762 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
	I0128 18:41:55.401500  161762 status.go:330] multinode-052675-m02 host status = "Stopped" (err=<nil>)
	I0128 18:41:55.401525  161762 status.go:343] host is not running, skipping remaining checks
	I0128 18:41:55.401531  161762 status.go:257] multinode-052675-m02 status: &{Name:multinode-052675-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.79s)
TestMultiNode/serial/RestartMultiNode (52.94s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052675 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052675 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (52.211785428s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-052675 status --alsologtostderr
E0128 18:42:48.070358   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.94s)
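The readiness assertion above hands kubectl a Go template that emits one Ready condition status per node. Evaluated locally with the standard library's text/template against a hand-written stand-in for a NodeList (kubectl's template engine adds extra functions, but this template uses only stdlib constructs), it shows exactly what the test checks for:

	// readytemplate_sketch.go: the node-readiness template from the test, run locally.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	func main() {
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		// Minimal stand-in for `kubectl get nodes` output: two nodes, both Ready.
		nodes := map[string]interface{}{
			"items": []map[string]interface{}{
				{"status": map[string]interface{}{"conditions": []map[string]interface{}{
					{"type": "Ready", "status": "True"},
				}}},
				{"status": map[string]interface{}{"conditions": []map[string]interface{}{
					{"type": "MemoryPressure", "status": "False"},
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" once per node
			log.Fatal(err)
		}
	}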
TestMultiNode/serial/ValidateNameConflict (28.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-052675
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052675-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-052675-m02 --driver=docker  --container-runtime=docker: exit status 14 (100.574639ms)
-- stdout --
	* [multinode-052675-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-052675-m02' is duplicated with machine name 'multinode-052675-m02' in profile 'multinode-052675'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-052675-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-052675-m03 --driver=docker  --container-runtime=docker: (25.580789103s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-052675
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-052675: exit status 80 (368.884278ms)
-- stdout --
	* Adding node m03 to cluster multinode-052675
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-052675-m03 already exists in multinode-052675-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-052675-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-052675-m03: (2.144508855s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.27s)
TestPreload (148.05s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-826207 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0128 18:43:32.991034   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-826207 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m29.611531347s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-826207 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-826207
E0128 18:44:56.040272   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-826207: (10.751742639s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-826207 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0128 18:45:11.802460   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-826207 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (44.237333935s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-826207 -- docker images
helpers_test.go:175: Cleaning up "test-preload-826207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-826207
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-826207: (2.244556773s)
--- PASS: TestPreload (148.05s)
TestScheduledStopUnix (101.8s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-946458 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-946458 --memory=2048 --driver=docker  --container-runtime=docker: (28.311791933s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946458 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-946458 -n scheduled-stop-946458
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946458 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946458 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-946458 -n scheduled-stop-946458
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-946458
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946458 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-946458
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-946458: exit status 7 (99.588025ms)
-- stdout --
	scheduled-stop-946458
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-946458 -n scheduled-stop-946458
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-946458 -n scheduled-stop-946458: exit status 7 (92.67684ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-946458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-946458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-946458: (1.697385097s)
--- PASS: TestScheduledStopUnix (101.80s)
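The scheduled-stop flow exercised here reduces to three CLI calls: schedule a stop, inspect TimeToStop, cancel. A sketch with a placeholder profile name, using only flags that appear in this log:

	// schedstop_sketch.go: hypothetical schedule/inspect/cancel sequence.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(args ...string) string {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		const p = "scheduled-stop-demo" // placeholder
		run("stop", "-p", p, "--schedule", "5m")
		fmt.Print(run("status", "--format={{.TimeToStop}}", "-p", p))
		run("stop", "-p", p, "--cancel-scheduled")
	}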
TestSkaffold (55.96s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3660783270 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-244312 --memory=2600 --driver=docker  --container-runtime=docker
E0128 18:47:48.070167   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-244312 --memory=2600 --driver=docker  --container-runtime=docker: (27.398738331s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3660783270 run --minikube-profile skaffold-244312 --kube-context skaffold-244312 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3660783270 run --minikube-profile skaffold-244312 --kube-context skaffold-244312 --status-check=true --port-forward=false --interactive=false: (15.038237678s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5b65ddd4b8-57nh6" [dd023a0b-8a9f-4787-bbbc-c612484ee831] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010502027s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6577d88969-x7xzb" [f51e5b4c-443c-4289-a5a9-c50cb609af86] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00622785s
helpers_test.go:175: Cleaning up "skaffold-244312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-244312
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-244312: (2.457353516s)
--- PASS: TestSkaffold (55.96s)
TestInsufficientStorage (11.26s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-132865 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0128 18:48:32.991199   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-132865 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.779829365s)
-- stdout --
	{"specversion":"1.0","id":"112308a5-4c14-4419-a4c8-888f2e28cf9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-132865] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"283531d8-fc3e-421d-9aca-faf96ee167a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"5a925062-3444-4d15-afa2-d1f86c35650a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1acc9d3-96b7-4356-8bda-f870c35146b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig"}}
	{"specversion":"1.0","id":"aba35f83-de62-4949-b8b5-48f2c6b5405d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube"}}
	{"specversion":"1.0","id":"67d6960d-734a-45d4-a0ac-19cb618b9daa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"99d7d1a0-8f16-4b1e-a961-c40678e72951","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"477e6c07-0faf-4216-9def-ef011e25063a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3a83f09b-4dca-48af-adb4-385f935b7d2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fed8c250-bbe3-4ab6-a4f5-90f5eaf275e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcb1d93a-08a5-4bf1-8c92-fb5b60e6e62c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ec2b6f08-5e98-4cfe-9826-18d88db9826d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-132865 in cluster insufficient-storage-132865","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4b5d8fe-fac7-4ffe-ba20-1ad90e4d43d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1589a0c7-c989-4fff-9d7a-5676b7aa7c16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"151fe0f8-7f72-45f3-878f-86b03ad0976d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-132865 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-132865 --output=json --layout=cluster: exit status 7 (342.135343ms)
-- stdout --
	{"Name":"insufficient-storage-132865","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132865","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0128 18:48:37.672416  200941 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-132865" does not appear in /home/jenkins/minikube-integration/15565-3259/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-132865 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-132865 --output=json --layout=cluster: exit status 7 (345.100329ms)
-- stdout --
	{"Name":"insufficient-storage-132865","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132865","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0128 18:48:38.018183  201050 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-132865" does not appear in /home/jenkins/minikube-integration/15565-3259/kubeconfig
	E0128 18:48:38.026106  201050 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/insufficient-storage-132865/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-132865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-132865
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-132865: (1.787334886s)
--- PASS: TestInsufficientStorage (11.26s)
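The two status dumps above show how "minikube status --output=json --layout=cluster" encodes a storage-starved cluster: the profile and its node both report StatusCode 507 (InsufficientStorage), the kubeconfig component reports 500, and the binary exits with status 7. Below is a minimal Go sketch of decoding that payload; the struct fields mirror only what appears in this report, not minikube's full schema, and the raw string is the second status dump copied verbatim from the log.

	// statusdump.go: decode the cluster-layout status JSON shown above.
	// Field set mirrors this report only; the real schema may carry more.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	type Component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type Node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]Component `json:"Components"`
	}

	type ClusterState struct {
		Name          string               `json:"Name"`
		StatusCode    int                  `json:"StatusCode"`
		StatusName    string               `json:"StatusName"`
		StatusDetail  string               `json:"StatusDetail"`
		BinaryVersion string               `json:"BinaryVersion"`
		Components    map[string]Component `json:"Components"`
		Nodes         []Node               `json:"Nodes"`
	}

	func main() {
		raw := `{"Name":"insufficient-storage-132865","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132865","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

		var st ClusterState
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			log.Fatal(err)
		}
		// The StatusCode values borrow HTTP semantics: 507 Insufficient
		// Storage for the cluster, 405/Stopped for apiserver and kubelet.
		fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
		for _, n := range st.Nodes {
			for _, c := range n.Components {
				fmt.Printf("  %s/%s: %d %s\n", n.Name, c.Name, c.StatusCode, c.StatusName)
			}
		}
	}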

                                                
                                    
TestRunningBinaryUpgrade (61.1s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.1692971901.exe start -p running-upgrade-557355 --memory=2200 --vm-driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.1692971901.exe start -p running-upgrade-557355 --memory=2200 --vm-driver=docker  --container-runtime=docker: (33.891603953s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-557355 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-557355 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.566864985s)
helpers_test.go:175: Cleaning up "running-upgrade-557355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-557355
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-557355: (2.215067218s)
--- PASS: TestRunningBinaryUpgrade (61.10s)

                                                
                                    
TestKubernetesUpgrade (122.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.894141024s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-637426

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-637426: (13.475105536s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-637426 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-637426 status --format={{.Host}}: exit status 7 (117.646252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.006946079s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-637426 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (122.587111ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-637426] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-637426
	    minikube start -p kubernetes-upgrade-637426 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6374262 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-637426 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-637426 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.037751975s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-637426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-637426
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-637426: (2.409148496s)
--- PASS: TestKubernetesUpgrade (122.14s)
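TestKubernetesUpgrade above walks one profile through start on v1.16.0, stop, restart on v1.26.1, a downgrade attempt that must fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), and a final restart. Below is a hedged Go sketch of driving that same sequence from outside the test suite; it assumes a minikube binary on PATH, and the profile name, versions, and flags are copied from the log.

	// upgradeflow.go: a sketch (not the test itself) of the sequence
	// TestKubernetesUpgrade drives above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func run(args ...string) error {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %s\n%s", strings.Join(args, " "), out)
		return err
	}

	func main() {
		p := "kubernetes-upgrade-637426"
		run("start", "-p", p, "--kubernetes-version=v1.16.0", "--driver=docker", "--container-runtime=docker")
		run("stop", "-p", p)
		run("start", "-p", p, "--kubernetes-version=v1.26.1", "--driver=docker", "--container-runtime=docker")

		// Downgrading in place is unsupported: minikube exits with
		// K8S_DOWNGRADE_UNSUPPORTED, surfaced as exit status 106 above.
		err := run("start", "-p", p, "--kubernetes-version=v1.16.0", "--driver=docker", "--container-runtime=docker")
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("downgrade refused with exit status:", ee.ExitCode())
		}
		run("delete", "-p", p)
	}

Replaying this sequence is slow by design: the full test recorded 122.14s above, most of it spent in the two real cluster starts.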

                                                
                                    
TestMissingContainerUpgrade (115.44s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.4176380076.exe start -p missing-upgrade-963140 --memory=2200 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.4176380076.exe start -p missing-upgrade-963140 --memory=2200 --driver=docker  --container-runtime=docker: (1m4.394839294s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-963140
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-963140: (4.156361095s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-963140
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-963140 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-963140 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.042630226s)
helpers_test.go:175: Cleaning up "missing-upgrade-963140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-963140
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-963140: (2.326117531s)
--- PASS: TestMissingContainerUpgrade (115.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (82.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.2808933344.exe start -p stopped-upgrade-401220 --memory=2200 --vm-driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.2808933344.exe start -p stopped-upgrade-401220 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.377575134s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.2808933344.exe -p stopped-upgrade-401220 stop
E0128 18:51:34.847200   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.2808933344.exe -p stopped-upgrade-401220 stop: (12.213598575s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-401220 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-401220 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.883782172s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (82.48s)

                                                
                                    
TestPause/serial/Start (49.59s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-575035 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-575035 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (49.588203475s)
--- PASS: TestPause/serial/Start (49.59s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (44.33s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-575035 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-575035 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.310804183s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.33s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-401220
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-401220: (1.468818337s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028674 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-028674 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (126.234283ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-028674] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
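The exit-14 failure above comes from flag validation: --no-kubernetes and --kubernetes-version contradict each other, so minikube rejects the pair before doing any work. Here is a generic Go sketch of that style of mutually-exclusive flag check; it is illustrative only, not minikube's actual implementation.

	// flagconflict.go: a generic sketch of the mutually-exclusive flag
	// check behind the MK_USAGE error above.
	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// The two flags contradict each other, so fail fast with a
		// usage-style exit code (the log shows minikube using 14 here).
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags OK")
	}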

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (31.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028674 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028674 --driver=docker  --container-runtime=docker: (30.922963527s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-028674 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (48.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (48.623018644s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028674 --no-kubernetes --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028674 --no-kubernetes --driver=docker  --container-runtime=docker: (13.662653679s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-028674 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-028674 status -o json: exit status 2 (379.808361ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-028674","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-028674
E0128 18:52:48.070577   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-028674: (1.911529498s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.95s)

                                                
                                    
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-575035 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-575035 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-575035 --output=json --layout=cluster: exit status 2 (388.535597ms)

                                                
                                                
-- stdout --
	{"Name":"pause-575035","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-575035","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)

                                                
                                    
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-575035 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-575035 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (2.36s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-575035 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-575035 --alsologtostderr -v=5: (2.363153963s)
--- PASS: TestPause/serial/DeletePaused (2.36s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (31.76s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (31.663950349s)
pause_test.go:168: (dbg) Run:  docker ps -a

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:173: (dbg) Run:  docker volume inspect pause-575035
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-575035: exit status 1 (29.405133ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-575035

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (31.76s)
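VerifyDeletedResources confirms that "minikube delete" removed the profile's container, volume, and network: docker volume inspect pause-575035 exits 1 with "Error: No such volume". A small sketch of automating that assertion, assuming docker on PATH; the profile name is taken from the log.

	// verifydeleted.go: a sketch of the cleanup assertion above. After
	// "minikube delete", inspecting the profile's docker volume should
	// fail with "No such volume".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "volume", "inspect", "pause-575035").CombinedOutput()
		if err != nil && strings.Contains(string(out), "No such volume") {
			fmt.Println("volume removed, as expected")
			return
		}
		fmt.Printf("volume still present or unexpected error:\n%s\n", out)
	}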

                                                
                                    
TestNoKubernetes/serial/Start (5.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028674 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028674 --no-kubernetes --driver=docker  --container-runtime=docker: (5.451631626s)
--- PASS: TestNoKubernetes/serial/Start (5.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-028674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-028674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.449518ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (20.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (20.06644954s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (20.82s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-gsm9w" [900febb5-fbb3-4cab-bbfd-05a03e366f17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-gsm9w" [900febb5-fbb3-4cab-bbfd-05a03e366f17] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005758507s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (49.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (49.33835658s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.34s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-028674
E0128 18:53:16.076587   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 18:53:16.081859   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 18:53:16.092232   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 18:53:16.112557   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 18:53:16.152837   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-028674: (1.331705634s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-677852 exec deployment/netcat -- nslookup kubernetes.default
E0128 18:53:16.233179   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0128 18:53:16.396088   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0128 18:53:16.717046   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028674 --driver=docker  --container-runtime=docker
E0128 18:53:17.358067   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 18:53:18.639162   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 18:53:21.200000   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028674 --driver=docker  --container-runtime=docker: (7.260633047s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.26s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-028674 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-028674 "sudo systemctl is-active --quiet service kubelet": exit status 1 (436.254768ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.582796273s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.58s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (49.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0128 18:53:32.990756   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 18:53:36.561724   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (49.279512305s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.28s)

                                                
                                    
TestNetworkPlugins/group/false/Start (49.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0128 18:53:57.042589   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (49.525980991s)
--- PASS: TestNetworkPlugins/group/false/Start (49.53s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rkphk" [032440df-9008-4523-81f5-7e741da3a8f7] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013307475s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-n4zfq" [afa028c5-0645-499b-a1c4-9e52afcc499f] Pending
helpers_test.go:344: "netcat-694fc96674-n4zfq" [afa028c5-0645-499b-a1c4-9e52afcc499f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-n4zfq" [afa028c5-0645-499b-a1c4-9e52afcc499f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008634342s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-j6wg9" [42f7dcfc-4d84-46b4-93fc-ee58deb56b9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-j6wg9" [42f7dcfc-4d84-46b4-93fc-ee58deb56b9e] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.009865153s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-lq9nm" [7486df3b-f186-46dc-a187-b110f2b8873f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:344: "netcat-694fc96674-lq9nm" [7486df3b-f186-46dc-a187-b110f2b8873f] Running
E0128 18:54:38.002798   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.010475044s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kvtnj" [73fc0dcc-14e9-4b20-9bd2-9ffebf5321dc] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019529945s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4cmlr" [34fba50e-6e71-4bad-aae4-b150c33d72aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:344: "netcat-694fc96674-4cmlr" [34fba50e-6e71-4bad-aae4-b150c33d72aa] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.092868993s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (55.562760966s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.56s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (59.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (59.775076023s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.78s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (50.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0128 18:55:11.801210   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (50.804172614s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.80s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (44.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-677852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (44.779907391s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (44.78s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-46zcr" [31e0d1e3-03af-4dfa-a78e-e2c47aa9782d] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.01281054s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-v6rxw" [c225599d-6d5d-4084-9b48-0dc0d2ce817b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-v6rxw" [c225599d-6d5d-4084-9b48-0dc0d2ce817b] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006749597s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-677852 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-hsd5q" [2446f7a4-b4c2-4881-8496-cb6f044d2bc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:344: "netcat-694fc96674-hsd5q" [2446f7a4-b4c2-4881-8496-cb6f044d2bc4] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.014868553s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-9vxrz" [032a0be3-40a2-4a7b-bb7c-bc1f4a3bcf18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:344: "netcat-694fc96674-9vxrz" [032a0be3-40a2-4a7b-bb7c-bc1f4a3bcf18] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006804911s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-677852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.52s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.35s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-677852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bsjls" [6100c978-d281-474d-8e52-f85d9bcd8cca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-bsjls" [6100c978-d281-474d-8e52-f85d9bcd8cca] Running

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006761468s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-677852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-677852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
E0128 19:01:41.746373   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:42.407161   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:45.536412   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:48.760960   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 19:02:04.137195   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:02:06.463873   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 19:02:16.214970   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 19:02:18.225007   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 19:02:22.707271   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:02:23.368190   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:02:26.497051   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:02:48.070419   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 19:03:07.142325   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 19:03:16.075506   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 19:03:26.058075   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:03:28.029042   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:28.034299   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:28.044522   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:28.064790   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:28.105145   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:28.185475   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:28.345909   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:28.666544   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:29.307214   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:30.587779   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:32.990708   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
E0128 19:03:33.147949   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:34.823523   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 19:03:38.268366   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:03:44.628123   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:03:45.289088   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:03:48.417629   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:03:48.508888   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:04:04.912576   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 19:04:08.989725   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:04:22.618594   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 19:04:32.372021   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 19:04:32.601436   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 19:04:34.383505   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 19:04:49.950827   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:04:50.304221   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 19:05:00.055646   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 19:05:02.065998   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 19:05:11.801440   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 19:05:42.214065   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:05:51.120047   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 19:06:00.784629   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:06:01.447181   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:06:04.575585   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:06:09.899280   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:06:11.871145   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/old-k8s-version-584226/client.crt: no such file or directory
E0128 19:06:28.468382   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:06:29.129684   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:06:32.258398   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory

TestStartStop/group/old-k8s-version/serial/FirstStart (123.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-584226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-584226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m3.892283025s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.89s)

TestStartStop/group/embed-certs/serial/FirstStart (47.8s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-315044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-315044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (47.80240867s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.80s)

TestStartStop/group/no-preload/serial/FirstStart (55.64s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-309493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-309493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (55.641059386s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.64s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-065751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-065751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (51.210397533s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.21s)

TestStartStop/group/embed-certs/serial/DeployApp (8.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-315044 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9ee4ae5-5f8f-4ab4-8258-56a9b345de5e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9ee4ae5-5f8f-4ab4-8258-56a9b345de5e] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.014247434s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-315044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-065751 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [46dbd372-368e-4a43-a4f2-262a1c875c77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
helpers_test.go:344: "busybox" [46dbd372-368e-4a43-a4f2-262a1c875c77] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.011431275s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-065751 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-315044 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-315044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/no-preload/serial/DeployApp (7.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-309493 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [494315d9-ff9d-442c-9e23-440db22f27c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:344: "busybox" [494315d9-ff9d-442c-9e23-440db22f27c9] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.011212792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-309493 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.33s)

TestStartStop/group/embed-certs/serial/Stop (10.76s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-315044 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-315044 --alsologtostderr -v=3: (10.764753913s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.76s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.67s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-065751 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-065751 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-065751 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-065751 --alsologtostderr -v=3: (10.957231982s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-309493 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-309493 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (11.04s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-309493 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-309493 --alsologtostderr -v=3: (11.044623093s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-315044 -n embed-certs-315044
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-315044 -n embed-certs-315044: exit status 7 (100.327528ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-315044 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (560.86s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-315044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0128 18:57:48.070296   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-315044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m20.4334793s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-315044 -n embed-certs-315044
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (560.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751: exit status 7 (151.993178ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-065751 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (572.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-065751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-065751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m32.530606092s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (572.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309493 -n no-preload-309493
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309493 -n no-preload-309493: exit status 7 (120.608548ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-309493 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (559.1s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-309493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0128 18:58:07.141667   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:07.146929   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:07.157235   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:07.177503   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:07.217796   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:07.298141   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:07.458554   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:07.779616   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:08.420512   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:09.700839   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:12.260985   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:16.075581   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
E0128 18:58:17.381595   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:58:27.621822   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-309493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m18.693133935s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-309493 -n no-preload-309493
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (559.10s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-584226 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [78e4b9c5-468f-489d-aee7-73ffe7deeefb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [78e4b9c5-468f-489d-aee7-73ffe7deeefb] Running
E0128 18:58:32.990517   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013551976s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-584226 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-584226 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-584226 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/old-k8s-version/serial/Stop (10.87s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-584226 --alsologtostderr -v=3
E0128 18:58:43.763562   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/skaffold-244312/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-584226 --alsologtostderr -v=3: (10.872689412s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-584226 -n old-k8s-version-584226
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-584226 -n old-k8s-version-584226: exit status 7 (107.323782ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-584226 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (66.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-584226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0128 18:58:48.101972   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:59:04.913453   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:04.918733   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:04.929860   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:04.950243   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:04.990929   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:05.071594   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:05.232634   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:05.553282   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:06.194176   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:07.475248   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:10.036636   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:15.157772   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:22.619197   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:22.624530   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:22.634830   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:22.655094   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:22.695444   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:22.776562   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:22.936955   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:23.257638   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:23.898234   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:25.178755   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:25.398455   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:27.739577   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:29.062378   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 18:59:32.372756   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:32.377997   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:32.388179   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:32.408420   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:32.448737   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:32.529141   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:32.689648   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:32.860092   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:33.010545   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:33.650720   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:34.383423   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:34.389341   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:34.399638   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:34.419950   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:34.460253   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:34.540556   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:34.700668   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:34.931020   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:35.021227   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:35.661390   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:36.942191   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:37.491416   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:39.502783   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:42.611929   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 18:59:43.100861   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 18:59:44.623238   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 18:59:45.879145   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 18:59:52.852686   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-584226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (1m5.747589604s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-584226 -n old-k8s-version-584226
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (66.15s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2grll" [68d885a5-7ee7-4f4a-94da-fe2a95add704] Running
E0128 18:59:54.863451   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011312134s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2grll" [68d885a5-7ee7-4f4a-94da-fe2a95add704] Running
E0128 19:00:03.581831   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006312236s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-584226 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-584226 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/old-k8s-version/serial/Pause (3.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-584226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-584226 -n old-k8s-version-584226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-584226 -n old-k8s-version-584226: exit status 2 (394.057503ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-584226 -n old-k8s-version-584226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-584226 -n old-k8s-version-584226: exit status 2 (395.455854ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-584226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-584226 -n old-k8s-version-584226
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-584226 -n old-k8s-version-584226
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.10s)

TestStartStop/group/newest-cni/serial/FirstStart (42.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-360728 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0128 19:00:11.801078   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 19:00:13.333684   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 19:00:15.343814   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 19:00:26.839874   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kindnet-677852/client.crt: no such file or directory
E0128 19:00:42.213843   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:42.219056   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:42.229323   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:42.249588   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:42.289879   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:42.370184   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:42.530835   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:42.851574   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:43.492238   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:44.543001   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/custom-flannel-677852/client.crt: no such file or directory
E0128 19:00:44.773099   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:47.334046   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:00:50.982614   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/auto-677852/client.crt: no such file or directory
E0128 19:00:52.454729   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-360728 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (42.896500901s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.90s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-360728 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/newest-cni/serial/Stop (10.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-360728 --alsologtostderr -v=3
E0128 19:00:54.294186   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/false-677852/client.crt: no such file or directory
E0128 19:00:56.304346   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/calico-677852/client.crt: no such file or directory
E0128 19:01:00.784349   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:00.789689   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:00.799934   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:00.820228   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:00.860549   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:00.940883   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:01.101276   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:01.421693   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:01.446925   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:01.452206   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:01.462485   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:01.482840   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:01.523159   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:01.603498   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:01.763913   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:02.062464   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:02.084690   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:02.695751   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:01:02.724858   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:03.343419   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:04.005001   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:04.575322   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:04.580625   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:04.590877   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:04.611156   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:04.651448   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:04.731719   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:04.892851   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-360728 --alsologtostderr -v=3: (10.91349411s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.91s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-360728 -n newest-cni-360728
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-360728 -n newest-cni-360728: exit status 7 (150.87004ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-360728 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (28.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-360728 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0128 19:01:05.213212   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:05.854302   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:05.904512   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:06.565234   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:07.134966   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:09.695154   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:11.024664   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:11.686028   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:14.815363   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
E0128 19:01:21.265809   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/enable-default-cni-677852/client.crt: no such file or directory
E0128 19:01:21.926469   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/bridge-677852/client.crt: no such file or directory
E0128 19:01:23.176110   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/flannel-677852/client.crt: no such file or directory
E0128 19:01:25.055761   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/kubenet-677852/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-360728 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (27.599614893s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-360728 -n newest-cni-360728
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (28.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-360728 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (3.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-360728 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-360728 -n newest-cni-360728
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-360728 -n newest-cni-360728: exit status 2 (393.023384ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-360728 -n newest-cni-360728
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-360728 -n newest-cni-360728: exit status 2 (394.903825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-360728 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-360728 -n newest-cni-360728
E0128 19:01:36.041016   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-360728 -n newest-cni-360728
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.14s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-qxqkl" [a32f1988-3ea1-46c9-9e40-b8d999a9f7ec] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01292587s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-qxqkl" [a32f1988-3ea1-46c9-9e40-b8d999a9f7ec] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006658428s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-315044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-js9l4" [b53dcfe5-6bf1-4cf2-b978-644754e1168f] Running

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011752223s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-315044 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-315044 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-315044 -n embed-certs-315044
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-315044 -n embed-certs-315044: exit status 2 (378.885848ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-315044 -n embed-certs-315044

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-315044 -n embed-certs-315044: exit status 2 (414.962142ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-315044 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-315044 -n embed-certs-315044
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-315044 -n embed-certs-315044
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-js9l4" [b53dcfe5-6bf1-4cf2-b978-644754e1168f] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008143529s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-309493 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-309493 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/no-preload/serial/Pause (3.02s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-309493 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309493 -n no-preload-309493
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309493 -n no-preload-309493: exit status 2 (372.890774ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-309493 -n no-preload-309493

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-309493 -n no-preload-309493: exit status 2 (397.521373ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-309493 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-309493 -n no-preload-309493
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-309493 -n no-preload-309493
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.02s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-4pll7" [36f7ab84-9fac-4345-a77d-842aae8dc26c] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012682851s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-4pll7" [36f7ab84-9fac-4345-a77d-842aae8dc26c] Running
E0128 19:07:35.402279   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
E0128 19:07:35.407519   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
E0128 19:07:35.417776   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
E0128 19:07:35.438090   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
E0128 19:07:35.478359   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
E0128 19:07:35.558678   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
E0128 19:07:35.719083   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006241664s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-065751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-065751 "sudo crictl images -o json"
E0128 19:07:36.039660   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-065751 --alsologtostderr -v=1
E0128 19:07:36.680482   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751: exit status 2 (370.800105ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751: exit status 2 (371.595507ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-065751 --alsologtostderr -v=1
E0128 19:07:37.961711   10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/no-preload-309493/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-065751 -n default-k8s-diff-port-065751
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

Test skip (19/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.74s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated

=== CONT  TestNetworkPlugins/group/cilium
panic.go:522: 
----------------------- debugLogs start: cilium-677852 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-677852

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-677852

>>> host: /etc/nsswitch.conf:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /etc/hosts:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /etc/resolv.conf:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-677852

>>> host: crictl pods:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: crictl containers:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> k8s: describe netcat deployment:
error: context "cilium-677852" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-677852" does not exist

>>> k8s: netcat logs:
error: context "cilium-677852" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-677852" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-677852" does not exist

>>> k8s: coredns logs:
error: context "cilium-677852" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-677852" does not exist

>>> k8s: api server logs:
error: context "cilium-677852" does not exist

>>> host: /etc/cni:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: ip a s:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: ip r s:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: iptables-save:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: iptables table nat:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-677852

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-677852

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-677852" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-677852" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-677852

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-677852

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-677852" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-677852" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-677852" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-677852" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-677852" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: kubelet daemon config:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> k8s: kubelet logs:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-677852

>>> host: docker daemon status:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: docker daemon config:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: docker system info:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: cri-docker daemon status:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: cri-docker daemon config:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: cri-dockerd version:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: containerd daemon status:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: containerd daemon config:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: containerd config dump:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: crio daemon status:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: crio daemon config:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: /etc/crio:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

>>> host: crio config:
* Profile "cilium-677852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-677852"

----------------------- debugLogs end: cilium-677852 [took: 6.225092752s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-677852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-677852
--- SKIP: TestNetworkPlugins/group/cilium (6.74s)

TestStartStop/group/disable-driver-mounts (0.31s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-560827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-560827
--- SKIP: TestStartStop/group/disable-driver-mounts (0.31s)
