Test Report: Docker_Linux 15565

3055562a73e3eb609a1971b4f703ef7d8b32cd43:2023-01-24:27570

Failed tests (1/308)

Order  Failed test                           Duration (s)
205    TestMultiNode/serial/StartAfterStop   149.19
TestMultiNode/serial/StartAfterStop (149.19s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 node start m03 --alsologtostderr
E0124 17:48:16.866419   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:48:44.550305   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-585561 node start m03 --alsologtostderr: exit status 80 (2m26.125838989s)

-- stdout --
	* Starting worker node multinode-585561-m03 in cluster multinode-585561
	* Pulling base image ...
	* Restarting existing docker container for "multinode-585561-m03" ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

-- /stdout --
** stderr ** 
	I0124 17:47:18.652035  147217 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:47:18.652212  147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:47:18.652224  147217 out.go:309] Setting ErrFile to fd 2...
	I0124 17:47:18.652231  147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:47:18.652405  147217 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	I0124 17:47:18.652748  147217 mustload.go:65] Loading cluster: multinode-585561
	I0124 17:47:18.653093  147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:47:18.653518  147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
	W0124 17:47:18.677624  147217 host.go:58] "multinode-585561-m03" host status: Stopped
	I0124 17:47:18.681146  147217 out.go:177] * Starting worker node multinode-585561-m03 in cluster multinode-585561
	I0124 17:47:18.683581  147217 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 17:47:18.685152  147217 out.go:177] * Pulling base image ...
	I0124 17:47:18.686621  147217 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:47:18.686660  147217 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 17:47:18.686662  147217 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 17:47:18.686688  147217 cache.go:57] Caching tarball of preloaded images
	I0124 17:47:18.686804  147217 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 17:47:18.686816  147217 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0124 17:47:18.686914  147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
	I0124 17:47:18.710140  147217 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 17:47:18.710170  147217 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 17:47:18.710189  147217 cache.go:193] Successfully downloaded all kic artifacts
	I0124 17:47:18.710231  147217 start.go:364] acquiring machines lock for multinode-585561-m03: {Name:mk1e51c84cfdfd4bc99cc8c668c0ed893d777e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 17:47:18.710304  147217 start.go:368] acquired machines lock for "multinode-585561-m03" in 50.145µs
	I0124 17:47:18.710329  147217 start.go:96] Skipping create...Using existing machine configuration
	I0124 17:47:18.710342  147217 fix.go:55] fixHost starting: m03
	I0124 17:47:18.710576  147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
	I0124 17:47:18.735820  147217 fix.go:103] recreateIfNeeded on multinode-585561-m03: state=Stopped err=<nil>
	W0124 17:47:18.735860  147217 fix.go:129] unexpected machine state, will restart: <nil>
	I0124 17:47:18.738309  147217 out.go:177] * Restarting existing docker container for "multinode-585561-m03" ...
	I0124 17:47:18.740196  147217 cli_runner.go:164] Run: docker start multinode-585561-m03
	I0124 17:47:19.100258  147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
	I0124 17:47:19.126249  147217 kic.go:426] container "multinode-585561-m03" state is running.
	I0124 17:47:19.126717  147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
	I0124 17:47:19.151082  147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
	I0124 17:47:19.151317  147217 machine.go:88] provisioning docker machine ...
	I0124 17:47:19.151359  147217 ubuntu.go:169] provisioning hostname "multinode-585561-m03"
	I0124 17:47:19.151414  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:19.175734  147217 main.go:141] libmachine: Using SSH client type: native
	I0124 17:47:19.175896  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0124 17:47:19.175917  147217 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-585561-m03 && echo "multinode-585561-m03" | sudo tee /etc/hostname
	I0124 17:47:19.176544  147217 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38658->127.0.0.1:32867: read: connection reset by peer
	I0124 17:47:22.321766  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561-m03
	
	I0124 17:47:22.321857  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:22.345338  147217 main.go:141] libmachine: Using SSH client type: native
	I0124 17:47:22.345510  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0124 17:47:22.345539  147217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-585561-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-585561-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 17:47:22.476699  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 17:47:22.476724  147217 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
	I0124 17:47:22.476757  147217 ubuntu.go:177] setting up certificates
	I0124 17:47:22.476768  147217 provision.go:83] configureAuth start
	I0124 17:47:22.476824  147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
	I0124 17:47:22.501758  147217 provision.go:138] copyHostCerts
	I0124 17:47:22.501830  147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
	I0124 17:47:22.501842  147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
	I0124 17:47:22.501907  147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
	I0124 17:47:22.501995  147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
	I0124 17:47:22.502003  147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
	I0124 17:47:22.502026  147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
	I0124 17:47:22.502074  147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
	I0124 17:47:22.502081  147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
	I0124 17:47:22.502100  147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
	I0124 17:47:22.502142  147217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561-m03]
	I0124 17:47:22.668018  147217 provision.go:172] copyRemoteCerts
	I0124 17:47:22.668080  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 17:47:22.668111  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:22.692791  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
	I0124 17:47:22.788105  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 17:47:22.806070  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0124 17:47:22.823781  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0124 17:47:22.842616  147217 provision.go:86] duration metric: configureAuth took 365.836584ms
	I0124 17:47:22.842641  147217 ubuntu.go:193] setting minikube options for container-runtime
	I0124 17:47:22.842831  147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:47:22.842893  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:22.867911  147217 main.go:141] libmachine: Using SSH client type: native
	I0124 17:47:22.868086  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0124 17:47:22.868102  147217 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 17:47:23.001163  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 17:47:23.001193  147217 ubuntu.go:71] root file system type: overlay
	I0124 17:47:23.001427  147217 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 17:47:23.001498  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:23.025778  147217 main.go:141] libmachine: Using SSH client type: native
	I0124 17:47:23.025966  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0124 17:47:23.026067  147217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 17:47:23.166292  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 17:47:23.166375  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:23.190853  147217 main.go:141] libmachine: Using SSH client type: native
	I0124 17:47:23.190998  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
	I0124 17:47:23.191016  147217 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 17:47:23.324396  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 17:47:23.324433  147217 machine.go:91] provisioned docker machine in 4.173084631s
	I0124 17:47:23.324446  147217 start.go:300] post-start starting for "multinode-585561-m03" (driver="docker")
	I0124 17:47:23.324454  147217 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 17:47:23.324515  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 17:47:23.324558  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:23.348407  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
	I0124 17:47:23.440010  147217 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 17:47:23.442722  147217 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 17:47:23.442745  147217 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 17:47:23.442754  147217 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 17:47:23.442780  147217 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 17:47:23.442789  147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
	I0124 17:47:23.442840  147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
	I0124 17:47:23.442906  147217 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
	I0124 17:47:23.442974  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 17:47:23.449741  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
	I0124 17:47:23.468022  147217 start.go:303] post-start completed in 143.560276ms
	I0124 17:47:23.468094  147217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 17:47:23.468134  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:23.494231  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
	I0124 17:47:23.585067  147217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 17:47:23.588891  147217 fix.go:57] fixHost completed within 4.878541027s
	I0124 17:47:23.588914  147217 start.go:83] releasing machines lock for "multinode-585561-m03", held for 4.878596131s
	I0124 17:47:23.588982  147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
	I0124 17:47:23.612903  147217 ssh_runner.go:195] Run: systemctl --version
	I0124 17:47:23.612946  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:23.612959  147217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0124 17:47:23.613044  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
	I0124 17:47:23.638647  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
	I0124 17:47:23.638965  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
	I0124 17:47:23.755965  147217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 17:47:23.760398  147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 17:47:23.776928  147217 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 17:47:23.777078  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0124 17:47:23.784224  147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0124 17:47:23.797692  147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0124 17:47:23.804694  147217 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0124 17:47:23.804719  147217 start.go:472] detecting cgroup driver to use...
	I0124 17:47:23.804747  147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 17:47:23.804879  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 17:47:23.817954  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0124 17:47:23.826669  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 17:47:23.835181  147217 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 17:47:23.835257  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 17:47:23.844020  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 17:47:23.852138  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 17:47:23.860203  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 17:47:23.868592  147217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 17:47:23.875995  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 17:47:23.884137  147217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 17:47:23.890671  147217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 17:47:23.897543  147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:47:23.988579  147217 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 17:47:24.071300  147217 start.go:472] detecting cgroup driver to use...
	I0124 17:47:24.071348  147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 17:47:24.071391  147217 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 17:47:24.081973  147217 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 17:47:24.082024  147217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 17:47:24.092267  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 17:47:24.106914  147217 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 17:47:24.194206  147217 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 17:47:24.287286  147217 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 17:47:24.287322  147217 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 17:47:24.301177  147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:47:24.385805  147217 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 17:47:24.634111  147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 17:47:24.719149  147217 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 17:47:24.799006  147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 17:47:24.876798  147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:47:24.951113  147217 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 17:47:24.966517  147217 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 17:47:24.966575  147217 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 17:47:24.970021  147217 start.go:540] Will wait 60s for crictl version
	I0124 17:47:24.970073  147217 ssh_runner.go:195] Run: which crictl
	I0124 17:47:24.973057  147217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 17:47:25.051636  147217 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 17:47:25.051708  147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 17:47:25.078629  147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 17:47:25.108359  147217 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 17:47:25.108449  147217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:47:25.208629  147217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-24 17:47:25.130756489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:47:25.208751  147217 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 17:47:25.231655  147217 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0124 17:47:25.235105  147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 17:47:25.244533  147217 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.4
	I0124 17:47:25.244572  147217 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:47:25.244721  147217 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
	I0124 17:47:25.244772  147217 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
	I0124 17:47:25.244875  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
	W0124 17:47:25.244914  147217 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
	I0124 17:47:25.244927  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
	I0124 17:47:25.244963  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
	I0124 17:47:25.244998  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
	I0124 17:47:25.245039  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
	I0124 17:47:25.245092  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
	I0124 17:47:25.245636  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 17:47:25.263205  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0124 17:47:25.279976  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 17:47:25.297082  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 17:47:25.314430  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
	I0124 17:47:25.331786  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 17:47:25.349013  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
	I0124 17:47:25.366363  147217 ssh_runner.go:195] Run: openssl version
	I0124 17:47:25.371190  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
	I0124 17:47:25.378805  147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
	I0124 17:47:25.382155  147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
	I0124 17:47:25.382196  147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
	I0124 17:47:25.386851  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
	I0124 17:47:25.394234  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 17:47:25.401432  147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:47:25.404313  147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:47:25.404366  147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:47:25.409124  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 17:47:25.416025  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
	I0124 17:47:25.423551  147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
	I0124 17:47:25.426645  147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
	I0124 17:47:25.426699  147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
	I0124 17:47:25.431581  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
	I0124 17:47:25.438759  147217 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 17:47:25.506238  147217 cni.go:84] Creating CNI manager for ""
	I0124 17:47:25.506261  147217 cni.go:136] 3 nodes found, recommending kindnet
	I0124 17:47:25.506269  147217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 17:47:25.506288  147217 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 17:47:25.506477  147217 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-585561-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 17:47:25.506583  147217 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 17:47:25.506647  147217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0124 17:47:25.513891  147217 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 17:47:25.513959  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0124 17:47:25.520827  147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0124 17:47:25.533248  147217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 17:47:25.545607  147217 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0124 17:47:25.548534  147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 17:47:25.557708  147217 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:47:25.557886  147217 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0124 17:47:25.557955  147217 addons.go:65] Setting storage-provisioner=true in profile "multinode-585561"
	I0124 17:47:25.557964  147217 addons.go:65] Setting default-storageclass=true in profile "multinode-585561"
	I0124 17:47:25.557974  147217 addons.go:227] Setting addon storage-provisioner=true in "multinode-585561"
	W0124 17:47:25.557982  147217 addons.go:236] addon storage-provisioner should already be in state true
	I0124 17:47:25.557994  147217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-585561"
	I0124 17:47:25.558054  147217 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:47:25.557994  147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:47:25.557988  147217 start.go:288] JoinCluster: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:47:25.558142  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0124 17:47:25.558192  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:47:25.558320  147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:47:25.558508  147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:47:25.588701  147217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 17:47:25.586744  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:47:25.590558  147217 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 17:47:25.590579  147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0124 17:47:25.590626  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:47:25.600319  147217 addons.go:227] Setting addon default-storageclass=true in "multinode-585561"
	W0124 17:47:25.600342  147217 addons.go:236] addon default-storageclass should already be in state true
	I0124 17:47:25.600364  147217 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:47:25.600751  147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:47:25.620738  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:47:25.627555  147217 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0124 17:47:25.627579  147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0124 17:47:25.627621  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:47:25.653298  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:47:25.723057  147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 17:47:25.741816  147217 start.go:301] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:47:25.741868  147217 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:47:25.742149  147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-585561-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0124 17:47:25.742194  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:47:25.759614  147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0124 17:47:25.771204  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:47:26.089951  147217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0124 17:47:26.091518  147217 addons.go:488] enableAddons completed in 533.631743ms
	I0124 17:47:26.167061  147217 node.go:109] successfully drained node "m03"
	I0124 17:47:26.171811  147217 node.go:125] successfully deleted node "m03"
	I0124 17:47:26.171837  147217 start.go:305] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:47:26.171858  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:47:26.171877  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
	E0124 17:47:26.374254  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:47:26.210765    1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:26.374279  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
	I0124 17:47:26.374290  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0124 17:47:26.412768  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:26.412814  147217 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:47:26.210765    1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:37.460571  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:47:37.460619  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
	E0124 17:47:37.616489  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:47:37.498525    1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:37.616527  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
	I0124 17:47:37.616543  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0124 17:47:37.653897  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:37.653931  147217 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:47:37.498525    1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:59.262453  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:47:59.262504  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
	E0124 17:47:59.420123  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:47:59.298142    2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:59.420152  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
	I0124 17:47:59.420168  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0124 17:47:59.459780  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:47:59.459822  147217 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:47:59.298142    2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:48:25.662843  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:48:25.662895  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
	E0124 17:48:25.817871  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:48:25.697923    2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:48:25.817906  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
	I0124 17:48:25.817920  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0124 17:48:25.857241  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:48:25.857277  147217 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:48:25.697923    2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:48:57.505667  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:48:57.505727  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
	E0124 17:48:57.660095  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:48:57.541681    2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:48:57.660123  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
	I0124 17:48:57.660137  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0124 17:48:57.698398  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:48:57.698425  147217 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:48:57.541681    2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:49:44.508544  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0124 17:49:44.508609  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
	E0124 17:49:44.662615  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:49:44.543023    3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:49:44.662638  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
	I0124 17:49:44.662652  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
	I0124 17:49:44.700290  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0124 17:49:44.700323  147217 start.go:290] JoinCluster complete in 2m19.142337617s
	I0124 17:49:44.703679  147217 out.go:177] 
	W0124 17:49:44.705552  147217 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1027-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0124 17:49:44.543023    3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 17:49:44.705572  147217 out.go:239] * 
	W0124 17:49:44.707691  147217 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
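Reproducing what the box asks for amounts to two commands (a sketch; the -p flag matches the profile used throughout this run):

	# Dump cluster logs to a file and keep a copy of the node-command log
	# referenced above, both for attaching to a GitHub issue.
	out/minikube-linux-amd64 -p multinode-585561 logs --file=logs.txt
	cp /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log .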
	I0124 17:49:44.709648  147217 out.go:177] 

** /stderr **
multinode_test.go:254: I0124 17:47:18.652035  147217 out.go:296] Setting OutFile to fd 1 ...
I0124 17:47:18.652212  147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:47:18.652224  147217 out.go:309] Setting ErrFile to fd 2...
I0124 17:47:18.652231  147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:47:18.652405  147217 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
I0124 17:47:18.652748  147217 mustload.go:65] Loading cluster: multinode-585561
I0124 17:47:18.653093  147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:18.653518  147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
W0124 17:47:18.677624  147217 host.go:58] "multinode-585561-m03" host status: Stopped
I0124 17:47:18.681146  147217 out.go:177] * Starting worker node multinode-585561-m03 in cluster multinode-585561
I0124 17:47:18.683581  147217 cache.go:120] Beginning downloading kic base image for docker with docker
I0124 17:47:18.685152  147217 out.go:177] * Pulling base image ...
I0124 17:47:18.686621  147217 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:47:18.686660  147217 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0124 17:47:18.686662  147217 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
I0124 17:47:18.686688  147217 cache.go:57] Caching tarball of preloaded images
I0124 17:47:18.686804  147217 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0124 17:47:18.686816  147217 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
I0124 17:47:18.686914  147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:47:18.710140  147217 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
I0124 17:47:18.710170  147217 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
I0124 17:47:18.710189  147217 cache.go:193] Successfully downloaded all kic artifacts
I0124 17:47:18.710231  147217 start.go:364] acquiring machines lock for multinode-585561-m03: {Name:mk1e51c84cfdfd4bc99cc8c668c0ed893d777e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 17:47:18.710304  147217 start.go:368] acquired machines lock for "multinode-585561-m03" in 50.145µs
I0124 17:47:18.710329  147217 start.go:96] Skipping create...Using existing machine configuration
I0124 17:47:18.710342  147217 fix.go:55] fixHost starting: m03
I0124 17:47:18.710576  147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
I0124 17:47:18.735820  147217 fix.go:103] recreateIfNeeded on multinode-585561-m03: state=Stopped err=<nil>
W0124 17:47:18.735860  147217 fix.go:129] unexpected machine state, will restart: <nil>
I0124 17:47:18.738309  147217 out.go:177] * Restarting existing docker container for "multinode-585561-m03" ...
I0124 17:47:18.740196  147217 cli_runner.go:164] Run: docker start multinode-585561-m03
I0124 17:47:19.100258  147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
I0124 17:47:19.126249  147217 kic.go:426] container "multinode-585561-m03" state is running.
I0124 17:47:19.126717  147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:19.151082  147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:47:19.151317  147217 machine.go:88] provisioning docker machine ...
I0124 17:47:19.151359  147217 ubuntu.go:169] provisioning hostname "multinode-585561-m03"
I0124 17:47:19.151414  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:19.175734  147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:19.175896  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:19.175917  147217 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-585561-m03 && echo "multinode-585561-m03" | sudo tee /etc/hostname
I0124 17:47:19.176544  147217 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38658->127.0.0.1:32867: read: connection reset by peer
I0124 17:47:22.321766  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561-m03

I0124 17:47:22.321857  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.345338  147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:22.345510  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:22.345539  147217 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-585561-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-585561-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0124 17:47:22.476699  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
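The hosts script above is idempotent: it touches /etc/hosts only when no existing line already maps the machine name, rewriting the 127.0.1.1 entry if one is present and appending one otherwise. A spot-check sketch (assuming minikube ssh's -n node selector):

	# Confirm the 127.0.1.1 hostname mapping landed on the worker.
	out/minikube-linux-amd64 -p multinode-585561 ssh -n m03 -- grep 127.0.1.1 /etc/hosts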
I0124 17:47:22.476724  147217 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
I0124 17:47:22.476757  147217 ubuntu.go:177] setting up certificates
I0124 17:47:22.476768  147217 provision.go:83] configureAuth start
I0124 17:47:22.476824  147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:22.501758  147217 provision.go:138] copyHostCerts
I0124 17:47:22.501830  147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
I0124 17:47:22.501842  147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
I0124 17:47:22.501907  147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
I0124 17:47:22.501995  147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
I0124 17:47:22.502003  147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
I0124 17:47:22.502026  147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
I0124 17:47:22.502074  147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
I0124 17:47:22.502081  147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
I0124 17:47:22.502100  147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
I0124 17:47:22.502142  147217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561-m03]
I0124 17:47:22.668018  147217 provision.go:172] copyRemoteCerts
I0124 17:47:22.668080  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0124 17:47:22.668111  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.692791  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:22.788105  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0124 17:47:22.806070  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0124 17:47:22.823781  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0124 17:47:22.842616  147217 provision.go:86] duration metric: configureAuth took 365.836584ms
I0124 17:47:22.842641  147217 ubuntu.go:193] setting minikube options for container-runtime
I0124 17:47:22.842831  147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:22.842893  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.867911  147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:22.868086  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:22.868102  147217 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0124 17:47:23.001163  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay

I0124 17:47:23.001193  147217 ubuntu.go:71] root file system type: overlay
I0124 17:47:23.001427  147217 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0124 17:47:23.001498  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.025778  147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:23.025966  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:23.026067  147217 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0124 17:47:23.166292  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0124 17:47:23.166375  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.190853  147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:23.190998  147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:23.191016  147217 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0124 17:47:23.324396  147217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0124 17:47:23.324433  147217 machine.go:91] provisioned docker machine in 4.173084631s
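The unit file's own comments explain the override trick: an empty ExecStart= first clears the command inherited from the base configuration, leaving exactly one effective ExecStart, which systemd requires for non-oneshot services. A sketch for confirming the merged result on the node (assuming minikube ssh's -n node selector):

	# Show the installed unit and the single effective ExecStart value.
	out/minikube-linux-amd64 -p multinode-585561 ssh -n m03 -- sudo systemctl cat docker.service
	out/minikube-linux-amd64 -p multinode-585561 ssh -n m03 -- systemctl show docker.service -p ExecStart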
I0124 17:47:23.324446  147217 start.go:300] post-start starting for "multinode-585561-m03" (driver="docker")
I0124 17:47:23.324454  147217 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0124 17:47:23.324515  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0124 17:47:23.324558  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.348407  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.440010  147217 ssh_runner.go:195] Run: cat /etc/os-release
I0124 17:47:23.442722  147217 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0124 17:47:23.442745  147217 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0124 17:47:23.442754  147217 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0124 17:47:23.442780  147217 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0124 17:47:23.442789  147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
I0124 17:47:23.442840  147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
I0124 17:47:23.442906  147217 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
I0124 17:47:23.442974  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0124 17:47:23.449741  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:47:23.468022  147217 start.go:303] post-start completed in 143.560276ms
I0124 17:47:23.468094  147217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 17:47:23.468134  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.494231  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.585067  147217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0124 17:47:23.588891  147217 fix.go:57] fixHost completed within 4.878541027s
I0124 17:47:23.588914  147217 start.go:83] releasing machines lock for "multinode-585561-m03", held for 4.878596131s
I0124 17:47:23.588982  147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:23.612903  147217 ssh_runner.go:195] Run: systemctl --version
I0124 17:47:23.612946  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.612959  147217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0124 17:47:23.613044  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.638647  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.638965  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.755965  147217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0124 17:47:23.760398  147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0124 17:47:23.776928  147217 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
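The find/sed pair above back-fills two things newer CNI loaders insist on: a "name" key (inserted just before the "type": "loopback" line) and a cniVersion pinned to 1.0.0. A sketch of the end state, assuming an illustrative /etc/cni/net.d/200-loopback.conf:

	$ cat /etc/cni/net.d/200-loopback.conf    # filename is an assumption
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}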
I0124 17:47:23.777078  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0124 17:47:23.784224  147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0124 17:47:23.797692  147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0124 17:47:23.804694  147217 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0124 17:47:23.804719  147217 start.go:472] detecting cgroup driver to use...
I0124 17:47:23.804747  147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:47:23.804879  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:47:23.817954  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0124 17:47:23.826669  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0124 17:47:23.835181  147217 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0124 17:47:23.835257  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0124 17:47:23.844020  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:47:23.852138  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0124 17:47:23.860203  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:47:23.868592  147217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0124 17:47:23.875995  147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
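The sed batch above normalizes /etc/containerd/config.toml even though Docker will be the active runtime: the pause image, OOM-score handling, cgroupfs via SystemdCgroup = false, the runc v2 shim, and the CNI conf dir. A quick spot-check of the result, assuming those keys exist in the image's default config:

	$ sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	sandbox_image = "registry.k8s.io/pause:3.9"
	restrict_oom_score_adj = false
	SystemdCgroup = false
	conf_dir = "/etc/cni/net.d"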
I0124 17:47:23.884137  147217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0124 17:47:23.890671  147217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0124 17:47:23.897543  147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:23.988579  147217 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0124 17:47:24.071300  147217 start.go:472] detecting cgroup driver to use...
I0124 17:47:24.071348  147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:47:24.071391  147217 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0124 17:47:24.081973  147217 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0124 17:47:24.082024  147217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0124 17:47:24.092267  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:47:24.106914  147217 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0124 17:47:24.194206  147217 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0124 17:47:24.287286  147217 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0124 17:47:24.287322  147217 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
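The 144-byte daemon.json payload itself is not logged; given the "configuring docker to use cgroupfs" line above, it is presumably along these lines (the exact contents here are an assumption, not taken from the log):

	$ cat /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": {"max-size": "100m"},
	  "storage-driver": "overlay2"
	}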
I0124 17:47:24.301177  147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:24.385805  147217 ssh_runner.go:195] Run: sudo systemctl restart docker
I0124 17:47:24.634111  147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:47:24.719149  147217 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0124 17:47:24.799006  147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:47:24.876798  147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:24.951113  147217 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0124 17:47:24.966517  147217 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0124 17:47:24.966575  147217 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0124 17:47:24.970021  147217 start.go:540] Will wait 60s for crictl version
I0124 17:47:24.970073  147217 ssh_runner.go:195] Run: which crictl
I0124 17:47:24.973057  147217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0124 17:47:25.051636  147217 start.go:556] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  20.10.22
RuntimeApiVersion:  v1alpha2
I0124 17:47:25.051708  147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:47:25.078629  147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:47:25.108359  147217 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
I0124 17:47:25.108449  147217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0124 17:47:25.208629  147217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-24 17:47:25.130756489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0124 17:47:25.208751  147217 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 17:47:25.231655  147217 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
I0124 17:47:25.235105  147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
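The rewrite above filters any stale host.minikube.internal entry out of /etc/hosts, appends the current one, and then cp's the temp file back rather than mv'ing it: inside a container /etc/hosts is a bind mount, so it has to be overwritten in place instead of replaced with a new inode. The same pattern with a made-up entry:

	$ { grep -v $'\texample.internal$' /etc/hosts; echo $'192.0.2.1\texample.internal'; } > /tmp/h.$$
	$ sudo cp /tmp/h.$$ /etc/hosts    # cp, not mv: keeps the bind-mounted inode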
I0124 17:47:25.244533  147217 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.4
I0124 17:47:25.244572  147217 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:47:25.244721  147217 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
I0124 17:47:25.244772  147217 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
I0124 17:47:25.244875  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
W0124 17:47:25.244914  147217 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
I0124 17:47:25.244927  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
I0124 17:47:25.244963  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
I0124 17:47:25.244998  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
I0124 17:47:25.245039  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
I0124 17:47:25.245092  147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:47:25.245636  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0124 17:47:25.263205  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0124 17:47:25.279976  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0124 17:47:25.297082  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0124 17:47:25.314430  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
I0124 17:47:25.331786  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0124 17:47:25.349013  147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
I0124 17:47:25.366363  147217 ssh_runner.go:195] Run: openssl version
I0124 17:47:25.371190  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
I0124 17:47:25.378805  147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
I0124 17:47:25.382155  147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
I0124 17:47:25.382196  147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
I0124 17:47:25.386851  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
I0124 17:47:25.394234  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0124 17:47:25.401432  147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.404313  147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.404366  147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.409124  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0124 17:47:25.416025  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
I0124 17:47:25.423551  147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
I0124 17:47:25.426645  147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
I0124 17:47:25.426699  147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
I0124 17:47:25.431581  147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
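The ln -fs calls in this block follow OpenSSL's CApath convention: a CA is looked up through a symlink named <subject-hash>.0, where the hash is whatever "openssl x509 -hash" prints. That is where the b5213941.0 and 3ec20f2e.0 names above come from, e.g.:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0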
I0124 17:47:25.438759  147217 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0124 17:47:25.506238  147217 cni.go:84] Creating CNI manager for ""
I0124 17:47:25.506261  147217 cni.go:136] 3 nodes found, recommending kindnet
I0124 17:47:25.506269  147217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0124 17:47:25.506288  147217 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0124 17:47:25.506477  147217 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.4
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "multinode-585561-m03"
  kubeletExtraArgs:
    node-ip: 192.168.58.4
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0124 17:47:25.506583  147217 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4

[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
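A note on the doubled ExecStart= in the unit above: systemd drop-ins are additive, so the empty ExecStart= first clears the command inherited from the base kubelet.service and the second line installs minikube's own invocation; without the clearing line systemd would reject a service with two ExecStart entries. The merged unit can be inspected with:

	$ systemctl cat kubelet    # base unit followed by the 10-kubeadm.conf drop-in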
I0124 17:47:25.506647  147217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0124 17:47:25.513891  147217 binaries.go:44] Found k8s binaries, skipping transfer
I0124 17:47:25.513959  147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0124 17:47:25.520827  147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0124 17:47:25.533248  147217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0124 17:47:25.545607  147217 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
I0124 17:47:25.548534  147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 17:47:25.557708  147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.557886  147217 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I0124 17:47:25.557955  147217 addons.go:65] Setting storage-provisioner=true in profile "multinode-585561"
I0124 17:47:25.557964  147217 addons.go:65] Setting default-storageclass=true in profile "multinode-585561"
I0124 17:47:25.557974  147217 addons.go:227] Setting addon storage-provisioner=true in "multinode-585561"
W0124 17:47:25.557982  147217 addons.go:236] addon storage-provisioner should already be in state true
I0124 17:47:25.557994  147217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-585561"
I0124 17:47:25.558054  147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.557994  147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:25.557988  147217 start.go:288] JoinCluster: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 17:47:25.558142  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
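The "kubeadm token create --print-join-command --ttl=0" run above mints a non-expiring bootstrap token on the control plane and prints a ready-to-run join command; the token and CA-cert hash it emits here are exactly the ones replayed against m03 below:

	kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 \
	    --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46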
I0124 17:47:25.558192  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.558320  147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.558508  147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.588701  147217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:47:25.586744  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.590558  147217 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0124 17:47:25.590579  147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0124 17:47:25.590626  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.600319  147217 addons.go:227] Setting addon default-storageclass=true in "multinode-585561"
W0124 17:47:25.600342  147217 addons.go:236] addon default-storageclass should already be in state true
I0124 17:47:25.600364  147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.600751  147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.620738  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.627555  147217 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0124 17:47:25.627579  147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0124 17:47:25.627621  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.653298  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.723057  147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0124 17:47:25.741816  147217 start.go:301] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:25.741868  147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.742149  147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-585561-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0124 17:47:25.742194  147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
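The drain above passes both --delete-emptydir-data and its deprecated alias --delete-local-data so the call works across kubectl versions, and --disable-eviction swaps API-initiated eviction for plain deletion so a PodDisruptionBudget cannot stall the 1-second grace period. A minimal hand-run equivalent would be:

	$ kubectl drain multinode-585561-m03 --ignore-daemonsets --delete-emptydir-data --force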
I0124 17:47:25.759614  147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0124 17:47:25.771204  147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:26.089951  147217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0124 17:47:26.091518  147217 addons.go:488] enableAddons completed in 533.631743ms
I0124 17:47:26.167061  147217 node.go:109] successfully drained node "m03"
I0124 17:47:26.171811  147217 node.go:125] successfully deleted node "m03"
I0124 17:47:26.171837  147217 start.go:305] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:26.171858  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:26.171877  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:26.374254  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:47:26.210765    1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:26.374279  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:26.374290  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:26.412768  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
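This first cycle already shows the loop that consumes the rest of the test's two minutes, repeating almost verbatim below: the join dies in the kubelet-start phase because a Ready Node named multinode-585561-m03 still exists (minikube deleted "m03" above, but the worker's kubelet plausibly re-registers the Node before kubeadm join runs), and the kubeadm reset meant to clean up between attempts itself bails out because both containerd.sock and cri-dockerd.sock are present on the host and no socket is specified. A manual cleanup one could try by hand, for comparison only (not what the test does):

	$ kubectl --kubeconfig /etc/kubernetes/admin.conf delete node multinode-585561-m03
	$ sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock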
I0124 17:47:26.412814  147217 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:47:26.210765    1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.460571  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:37.460619  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:37.616489  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:47:37.498525    1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.616527  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:37.616543  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:37.653897  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.653931  147217 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:47:37.498525    1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.262453  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:59.262504  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:59.420123  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:47:59.298142    2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.420152  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:59.420168  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:59.459780  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.459822  147217 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:47:59.298142    2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.662843  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:48:25.662895  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:48:25.817871  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:48:25.697923    2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.817906  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:48:25.817920  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:48:25.857241  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.857277  147217 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:48:25.697923    2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.505667  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:48:57.505727  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:48:57.660095  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:48:57.541681    2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.660123  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:48:57.660137  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:48:57.698398  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.698425  147217 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:48:57.541681    2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:49:44.508544  147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:49:44.508609  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:49:44.662615  147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:49:44.543023    3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:49:44.662638  147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:49:44.662652  147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:49:44.700290  147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
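Each retry hits the same pair of failures, so the loop cannot make progress. One way to confirm the blocking state between retries (a hedged check, assuming kubectl access to the control plane) is to look for the stale registration directly:

    kubectl get node multinode-585561-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" while the conflicting "Ready" Node object is still present
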
I0124 17:49:44.700323  147217 start.go:290] JoinCluster complete in 2m19.142337617s
I0124 17:49:44.703679  147217 out.go:177] 
W0124 17:49:44.705552  147217 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

stderr:
W0124 17:49:44.543023    3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

W0124 17:49:44.705572  147217 out.go:239] * 
W0124 17:49:44.707691  147217 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0124 17:49:44.709648  147217 out.go:177] 
multinode_test.go:255: node start returned an error. args "out/minikube-linux-amd64 -p multinode-585561 node start m03 --alsologtostderr": exit status 80
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
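A compact way to summarize what kubectl get nodes returns at this point (a sketch, not part of the test itself) is a jsonpath listing of each node name and its Ready condition:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
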
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-585561
helpers_test.go:235: (dbg) docker inspect multinode-585561:

-- stdout --
	[
	    {
	        "Id": "cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37",
	        "Created": "2023-01-24T17:45:29.114439725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 128759,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T17:45:29.477128533Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/hostname",
	        "HostsPath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/hosts",
	        "LogPath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37-json.log",
	        "Name": "/multinode-585561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-585561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-585561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f-init/diff:/var/lib/docker/overlay2/c0f6dd4fb02f7ad02ac9f070fe21bdce826b05ddd2d4864f5d03facc86ec9ecc/diff:/var/lib/docker/overlay2/d2765ba50729ba695f42f46a7962c3519217eee28174849e85afadbf6b0e02d6/diff:/var/lib/docker/overlay2/309bf5708416378c17fc70427d4f2456f99f7fba90e3a234d34bfe13a2c59f12/diff:/var/lib/docker/overlay2/56f885e6f444248a029fc5b9208073963c6309559557c10307b26dcf0e30a995/diff:/var/lib/docker/overlay2/9ba0736edb7b66db737f51458527fbdb399a0807534f33ddc2f44cda6a8bd6d1/diff:/var/lib/docker/overlay2/f4e07abaa5d333f487a0edb77aad2f0af86ce4fd18c9a638cb401437a32f4d74/diff:/var/lib/docker/overlay2/00d3f326fb5e24a0682a26ab4f237656d873e100c29691fdb55be303b2185d58/diff:/var/lib/docker/overlay2/39df02652678fc73d7f221b726c0a3c6f4d6829085620f3480306ee5366370a8/diff:/var/lib/docker/overlay2/f89bbc718777cb4603fad4be8968b39ceee7410ad49ad3efdec549691abb15e9/diff:/var/lib/docker/overlay2/0bc828
e5958e3308bc5bc21337653e4c70d63cf0250c7a996820d7e263d4b782/diff:/var/lib/docker/overlay2/960bb317e53c181050c19f97b8bdf3f8ea1ee37186960c105f4216b9a1db2749/diff:/var/lib/docker/overlay2/020e2ab5c70c297cee27e775db50c2d397921e19e31d24f8e0fffb93ccc480ee/diff:/var/lib/docker/overlay2/38292f0ce0a8c510703a3889510830c29e47c20fc6b836727d66579217b4aa9c/diff:/var/lib/docker/overlay2/2240207f0bcbbbf807a6a2f426df2f218dbe10587d8c23f4b3470573e5d95fd4/diff:/var/lib/docker/overlay2/5cb29ea4ba6b3e37954a7dcd08d3090fdea350f0feee4ec33fa89009397f4df0/diff:/var/lib/docker/overlay2/e020b8a1019b51428090e953307cfb464abb245cb10162522f9ce462cba4eae3/diff:/var/lib/docker/overlay2/dedc1cd320ab9a914dcc9de1bc6dc55b769c26e01b2e220e5b101264cf3885fd/diff:/var/lib/docker/overlay2/d57af40191f2b434bba5bb6d799793eac2c6cb2d478bd7c64158ab270aa7b748/diff:/var/lib/docker/overlay2/6405dc6842891f477861f193833a331c97a4ca02fce456ec2e80aad9de94b015/diff:/var/lib/docker/overlay2/631e58303634bfa60e5c502ec2f568a62c2b2169ae462f1171b3146cf04f5f7e/diff:/var/lib/d
ocker/overlay2/d29fa359059801155d9599e76a6845758ba216d5ea775b01d6ae8f696b8c456b/diff:/var/lib/docker/overlay2/28b702bccbb33fa6cd49012bc362d952de52ad467f4ea93354db79737ae22b03/diff:/var/lib/docker/overlay2/8a7d52ec1a3e894eed2d4271f1df503d0f8cda630fcd1bc15af62184bdaf3d65/diff:/var/lib/docker/overlay2/c9b7f9ea4c8b40bcc4e5041c580dfe6d3517781f4dfddcda0d8aaa7e109a0ec2/diff:/var/lib/docker/overlay2/df47b021373f0eceb801029054f0d9f0612b49d3207f2d163077ad905f488ee5/diff:/var/lib/docker/overlay2/fcf3520ccb48ac6dadbebea4e85a539d1859a06405e690466352317f35b7f17f/diff:/var/lib/docker/overlay2/4d2edf4c993582a042a54f29c78e7524a1f5846a4f6f25463d664b4a4b03d878/diff:/var/lib/docker/overlay2/672267cb3f0664c4fcacd27e02917b0edeaa3867c70baef5dc534a8ccf798ffb/diff:/var/lib/docker/overlay2/ded6694e77d8f645d1aeb5353d7c912883d93df91f1d122bba1d3eabe5aeb5ca/diff:/var/lib/docker/overlay2/d5d7bc0be8ec3dd554cb0bdff490dbfa92cd679d68e433547ce0a558813ded64/diff:/var/lib/docker/overlay2/d992f24d356c8b6303454fa3c4ed34187fa10b2a85830839330cd2866c1
27932/diff:/var/lib/docker/overlay2/625d4aee0fbd36cfefdd61cff165ebb6ea2c45b21cb93928bc8b16ee0289581b/diff:/var/lib/docker/overlay2/b487e0d1b131079e1ed93646b9aab301046224986d2d47a8083397694a7699ec/diff:/var/lib/docker/overlay2/6acd12e207d6d8b1422a0897a896c591cb9e3301c4c46d83c5a2b6e40019dd19/diff:/var/lib/docker/overlay2/5944c728d3d43299b8773a799379ebcf947ab2447a83f1adcc32731fb40ced3c/diff:/var/lib/docker/overlay2/12c67321e07ad577eba23729dc9d9a2edb3a8d4c7de3a1c682c411c00cd14dac/diff:/var/lib/docker/overlay2/89073ac9d49633306646e6ada568a9647c4a88d657e60fd2a0daa3a2bb970598/diff:/var/lib/docker/overlay2/0a290286677b74fb640d9cd6b48d3579d79f4ca62157270f290b74f6a606adf2/diff:/var/lib/docker/overlay2/fccecd53fbac0d1318c0a0f27a725dbaddd955866823c94258132b2db0e10339/diff:/var/lib/docker/overlay2/3f7d25eebece90d8e38d92efa5522717838a52fcf6de68a61a2f3922139ad36c/diff:/var/lib/docker/overlay2/84563ab9d1af117abaf3eadbdfbcd03d46c79f907aa260d46bf795185eaf69b8/diff:/var/lib/docker/overlay2/112ca0d95ec4e2fcaa4a352262498bde563dd0
dcbe1b5a8fb9635be152bae4f9/diff:/var/lib/docker/overlay2/956687ef2d7ff7d948d0cb4b6415751cd49516ed63b9293d0871ca6c6e99af68/diff:/var/lib/docker/overlay2/edb008e0ceae1ade25c3f42e96590263af39296507e3518acc6462d2b9f227d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-585561",
	                "Source": "/var/lib/docker/volumes/multinode-585561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-585561",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-585561",
	                "name.minikube.sigs.k8s.io": "multinode-585561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c31a4ba7465924ba53f6cfa5e1d4ad4c332e4b060ae018292262ea6df072860",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9c31a4ba7465",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-585561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cff9d026e22c",
	                        "multinode-585561"
	                    ],
	                    "NetworkID": "d58778b719d578917d23962b648cc107a0848a4f0a97bc7f4d60b63c79e3010d",
	                    "EndpointID": "cd6653c36b268ea98f13f5d4e84fec087782f789f4015ce7fb10e4f474a0084f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
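The inspect dump above is easier to query field-by-field with Go templates. For example, the host port mapped to the node's SSH port and its address on the cluster network (values match the dump above):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-585561
    # -> 32852
    docker inspect -f '{{(index .NetworkSettings.Networks "multinode-585561").IPAddress}}' multinode-585561
    # -> 192.168.58.2
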
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-585561 -n multinode-585561
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-585561 logs -n 25: (1.097238732s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-585561 cp multinode-585561:/home/docker/cp-test.txt                           | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m03:/home/docker/cp-test_multinode-585561_multinode-585561-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n multinode-585561-m03 sudo cat                                   | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | /home/docker/cp-test_multinode-585561_multinode-585561-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp testdata/cp-test.txt                                                | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2351278162/001/cp-test_multinode-585561-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561:/home/docker/cp-test_multinode-585561-m02_multinode-585561.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n multinode-585561 sudo cat                                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | /home/docker/cp-test_multinode-585561-m02_multinode-585561.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m03:/home/docker/cp-test_multinode-585561-m02_multinode-585561-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n multinode-585561-m03 sudo cat                                   | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | /home/docker/cp-test_multinode-585561-m02_multinode-585561-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp testdata/cp-test.txt                                                | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2351278162/001/cp-test_multinode-585561-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561:/home/docker/cp-test_multinode-585561-m03_multinode-585561.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n multinode-585561 sudo cat                                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | /home/docker/cp-test_multinode-585561-m03_multinode-585561.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt                       | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m02:/home/docker/cp-test_multinode-585561-m03_multinode-585561-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n                                                                 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | multinode-585561-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-585561 ssh -n multinode-585561-m02 sudo cat                                   | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	|         | /home/docker/cp-test_multinode-585561-m03_multinode-585561-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-585561 node stop m03                                                          | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
	| node    | multinode-585561 node start                                                             | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC |                     |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 17:45:22
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 17:45:22.740102  128080 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:45:22.740318  128080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:45:22.740351  128080 out.go:309] Setting ErrFile to fd 2...
	I0124 17:45:22.740363  128080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:45:22.740794  128080 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	I0124 17:45:22.741486  128080 out.go:303] Setting JSON to false
	I0124 17:45:22.742880  128080 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1667,"bootTime":1674580656,"procs":872,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0124 17:45:22.742950  128080 start.go:135] virtualization: kvm guest
	I0124 17:45:22.745812  128080 out.go:177] * [multinode-585561] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0124 17:45:22.747434  128080 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 17:45:22.747386  128080 notify.go:220] Checking for updates...
	I0124 17:45:22.749323  128080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 17:45:22.751222  128080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:45:22.752872  128080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	I0124 17:45:22.754314  128080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0124 17:45:22.755958  128080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 17:45:22.757539  128080 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 17:45:22.784306  128080 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0124 17:45:22.784426  128080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:45:22.877477  128080 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:45:22.803388442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:45:22.877624  128080 docker.go:282] overlay module found
	I0124 17:45:22.879951  128080 out.go:177] * Using the docker driver based on user configuration
	I0124 17:45:22.881441  128080 start.go:296] selected driver: docker
	I0124 17:45:22.881461  128080 start.go:840] validating driver "docker" against <nil>
	I0124 17:45:22.881472  128080 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 17:45:22.882208  128080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:45:22.975806  128080 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:45:22.900637343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:45:22.975935  128080 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0124 17:45:22.976109  128080 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0124 17:45:22.978387  128080 out.go:177] * Using Docker driver with root privileges
	I0124 17:45:22.979872  128080 cni.go:84] Creating CNI manager for ""
	I0124 17:45:22.979887  128080 cni.go:136] 0 nodes found, recommending kindnet
	I0124 17:45:22.979895  128080 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0124 17:45:22.979904  128080 start_flags.go:319] config:
	{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:45:22.981487  128080 out.go:177] * Starting control plane node multinode-585561 in cluster multinode-585561
	I0124 17:45:22.982794  128080 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 17:45:22.984317  128080 out.go:177] * Pulling base image ...
	I0124 17:45:22.985690  128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:45:22.985735  128080 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 17:45:22.985744  128080 cache.go:57] Caching tarball of preloaded images
	I0124 17:45:22.985808  128080 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 17:45:22.985864  128080 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 17:45:22.985880  128080 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0124 17:45:22.986242  128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
	I0124 17:45:22.986265  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json: {Name:mkd32f750addef5e117c6c613ad00e8eb787ff9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:45:23.008849  128080 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 17:45:23.008887  128080 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
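The two cache checks above mean no network pull happens here: the preload tarball is already on disk and the pinned kicbase digest is already in the local daemon. The daemon-side check is roughly equivalent to the following sketch (assumes the image was previously pulled by this digest, so the daemon knows the repo@digest reference):

    docker image inspect \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a \
      --format '{{.Id}}' && echo "present in daemon, pull skipped"
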
	I0124 17:45:23.008909  128080 cache.go:193] Successfully downloaded all kic artifacts
	I0124 17:45:23.008950  128080 start.go:364] acquiring machines lock for multinode-585561: {Name:mkedb2101c6d898ca1123ce19efb5691312160a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 17:45:23.009073  128080 start.go:368] acquired machines lock for "multinode-585561" in 98.617µs
	I0124 17:45:23.009100  128080 start.go:93] Provisioning new machine with config: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 17:45:23.009192  128080 start.go:125] createHost starting for "" (driver="docker")
	I0124 17:45:23.011894  128080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0124 17:45:23.012139  128080 start.go:159] libmachine.API.Create for "multinode-585561" (driver="docker")
	I0124 17:45:23.012171  128080 client.go:168] LocalClient.Create starting
	I0124 17:45:23.012239  128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem
	I0124 17:45:23.012279  128080 main.go:141] libmachine: Decoding PEM data...
	I0124 17:45:23.012307  128080 main.go:141] libmachine: Parsing certificate...
	I0124 17:45:23.012394  128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem
	I0124 17:45:23.012425  128080 main.go:141] libmachine: Decoding PEM data...
	I0124 17:45:23.012443  128080 main.go:141] libmachine: Parsing certificate...
	I0124 17:45:23.012836  128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0124 17:45:23.034132  128080 cli_runner.go:211] docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0124 17:45:23.034202  128080 network_create.go:281] running [docker network inspect multinode-585561] to gather additional debugging logs...
	I0124 17:45:23.034219  128080 cli_runner.go:164] Run: docker network inspect multinode-585561
	W0124 17:45:23.055690  128080 cli_runner.go:211] docker network inspect multinode-585561 returned with exit code 1
	I0124 17:45:23.055721  128080 network_create.go:284] error running [docker network inspect multinode-585561]: docker network inspect multinode-585561: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-585561
	I0124 17:45:23.055733  128080 network_create.go:286] output of [docker network inspect multinode-585561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-585561
	
	** /stderr **
	I0124 17:45:23.056076  128080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 17:45:23.079066  128080 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7362ae67aae9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fe:8a:74:74} reservation:<nil>}
	I0124 17:45:23.079704  128080 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003cefd0}
	I0124 17:45:23.079731  128080 network_create.go:123] attempt to create docker network multinode-585561 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0124 17:45:23.079781  128080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-585561 multinode-585561
	I0124 17:45:23.135188  128080 network_create.go:107] docker network multinode-585561 192.168.58.0/24 created
	I0124 17:45:23.135214  128080 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-585561" container
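
For context on the two network.go lines above: the probe walks candidate private /24 ranges, skips any subnet an existing bridge already owns (192.168.49.0/24 belongs to br-7362ae67aae9 here), and the node's static address is simply the first client IP after the gateway. A minimal standalone sketch of that selection, not minikube's actual network.go; the step between attempts is inferred from the 49-to-58 jump in the log:

    package main

    import (
    	"fmt"
    	"net"
    )

    // takenSubnets would be gathered from `docker network inspect` on every
    // existing network; hard-coded here to mirror the log above.
    var takenSubnets = map[string]bool{"192.168.49.0/24": true}

    func main() {
    	// Probe candidate private /24s until one is free.
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if takenSubnets[cidr] {
    			continue // taken, move to the next candidate
    		}
    		gateway := net.IPv4(192, 168, byte(third), 1) // .1: bridge gateway
    		node := net.IPv4(192, 168, byte(third), 2)    // .2: first client IP, the node's static address
    		fmt.Printf("subnet %s gateway %s node %s\n", cidr, gateway, node)
    		return
    	}
    }
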
	I0124 17:45:23.135271  128080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0124 17:45:23.156984  128080 cli_runner.go:164] Run: docker volume create multinode-585561 --label name.minikube.sigs.k8s.io=multinode-585561 --label created_by.minikube.sigs.k8s.io=true
	I0124 17:45:23.179713  128080 oci.go:103] Successfully created a docker volume multinode-585561
	I0124 17:45:23.179809  128080 cli_runner.go:164] Run: docker run --rm --name multinode-585561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561 --entrypoint /usr/bin/test -v multinode-585561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0124 17:45:23.761168  128080 oci.go:107] Successfully prepared a docker volume multinode-585561
	I0124 17:45:23.761204  128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:45:23.761225  128080 kic.go:190] Starting extracting preloaded images to volume ...
	I0124 17:45:23.761292  128080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0124 17:45:28.997184  128080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (5.235828225s)
	I0124 17:45:28.997211  128080 kic.go:199] duration metric: took 5.235984 seconds to extract preloaded images to volume
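
The extraction pattern above is worth spelling out: the lz4 preload is mounted read-only next to the named volume, and a throwaway kicbase container untars it into /extractDir, so the node container's /var is pre-populated with the Docker image graph before the node ever starts. A sketch of issuing the same docker invocation from Go (tarball path shortened; assumes docker and the preload exist locally):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	const (
    		volume  = "multinode-585561"
    		tarball = "/path/to/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4"
    		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541"
    	)
    	// --rm: the sidecar exists only to run tar; the extracted layers
    	// survive in the named volume mounted at /extractDir.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }
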
	W0124 17:45:28.997350  128080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0124 17:45:28.997453  128080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0124 17:45:29.091231  128080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-585561 --name multinode-585561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-585561 --network multinode-585561 --ip 192.168.58.2 --volume multinode-585561:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0124 17:45:29.486016  128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Running}}
	I0124 17:45:29.510635  128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:45:29.535825  128080 cli_runner.go:164] Run: docker exec multinode-585561 stat /var/lib/dpkg/alternatives/iptables
	I0124 17:45:29.583614  128080 oci.go:144] the created container "multinode-585561" has a running status.
	I0124 17:45:29.583651  128080 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa...
	I0124 17:45:29.988873  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0124 17:45:29.988916  128080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0124 17:45:30.050707  128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:45:30.074564  128080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0124 17:45:30.074589  128080 kic_runner.go:114] Args: [docker exec --privileged multinode-585561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0124 17:45:30.151420  128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:45:30.174110  128080 machine.go:88] provisioning docker machine ...
	I0124 17:45:30.174164  128080 ubuntu.go:169] provisioning hostname "multinode-585561"
	I0124 17:45:30.174237  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:30.196678  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:45:30.196974  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0124 17:45:30.197003  128080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-585561 && echo "multinode-585561" | sudo tee /etc/hostname
	I0124 17:45:30.337467  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561
	
	I0124 17:45:30.337539  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:30.359909  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:45:30.360054  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0124 17:45:30.360074  128080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-585561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-585561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 17:45:30.488217  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 17:45:30.488252  128080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
	I0124 17:45:30.488276  128080 ubuntu.go:177] setting up certificates
	I0124 17:45:30.488285  128080 provision.go:83] configureAuth start
	I0124 17:45:30.488336  128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561
	I0124 17:45:30.510820  128080 provision.go:138] copyHostCerts
	I0124 17:45:30.510855  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
	I0124 17:45:30.510889  128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
	I0124 17:45:30.510896  128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
	I0124 17:45:30.510972  128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
	I0124 17:45:30.511056  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
	I0124 17:45:30.511075  128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
	I0124 17:45:30.511079  128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
	I0124 17:45:30.511113  128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
	I0124 17:45:30.511167  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
	I0124 17:45:30.511186  128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
	I0124 17:45:30.511195  128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
	I0124 17:45:30.511230  128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
	I0124 17:45:30.511288  128080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561]
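
configureAuth reissues the machine's server certificate from the local CA so that dockerd's TLS listener presents every identity in the san=[...] list: IP literals become IPAddresses, everything else DNSNames. A compact sketch with crypto/x509 (CA loading and PEM encoding omitted; the key size is illustrative, and only the 26280h validity is taken from the config dump above):

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a server certificate for the given SANs with the
    // provided CA, splitting IPs and DNS names the way the san=[...] list
    // above implies.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
    	org string, sans []string) ([]byte, *rsa.PrivateKey, error) {

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }
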
	I0124 17:45:30.597657  128080 provision.go:172] copyRemoteCerts
	I0124 17:45:30.597711  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 17:45:30.597741  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:30.620825  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:45:30.715730  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0124 17:45:30.715814  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 17:45:30.733077  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0124 17:45:30.733135  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0124 17:45:30.750117  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0124 17:45:30.750172  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0124 17:45:30.766220  128080 provision.go:86] duration metric: configureAuth took 277.917524ms
	I0124 17:45:30.766248  128080 ubuntu.go:193] setting minikube options for container-runtime
	I0124 17:45:30.766448  128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:45:30.766499  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:30.789654  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:45:30.789821  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0124 17:45:30.789842  128080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 17:45:30.920605  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 17:45:30.920632  128080 ubuntu.go:71] root file system type: overlay
	I0124 17:45:30.920819  128080 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 17:45:30.920896  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:30.944263  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:45:30.944406  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0124 17:45:30.944464  128080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 17:45:31.081621  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 17:45:31.081701  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:31.104877  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:45:31.105013  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0124 17:45:31.105031  128080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 17:45:31.733469  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 17:45:31.080138432 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0124 17:45:31.733506  128080 machine.go:91] provisioned docker machine in 1.559368678s
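
The diff one-liner a few lines up is an idempotent-update pattern: the freshly rendered docker.service.new only replaces the unit on disk, and Docker is only reloaded and restarted, when the content actually differs. The same pattern expressed locally in Go (hypothetical helper; the real flow shells the equivalent commands out over SSH as shown):

    package provision

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // updateUnit swaps in the new unit only when it differs from the one on
    // disk, then reloads systemd and restarts the service.
    func updateUnit(path string, desired []byte) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, desired) {
    		return nil // unchanged: skip the daemon-reload and restart entirely
    	}
    	if err := os.WriteFile(path, desired, 0o644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }
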
	I0124 17:45:31.733517  128080 client.go:171] LocalClient.Create took 8.721340407s
	I0124 17:45:31.733536  128080 start.go:167] duration metric: libmachine.API.Create for "multinode-585561" took 8.721396631s
	I0124 17:45:31.733554  128080 start.go:300] post-start starting for "multinode-585561" (driver="docker")
	I0124 17:45:31.733561  128080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 17:45:31.733623  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 17:45:31.733681  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:31.756770  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:45:31.847792  128080 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 17:45:31.850310  128080 command_runner.go:130] > NAME="Ubuntu"
	I0124 17:45:31.850331  128080 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0124 17:45:31.850338  128080 command_runner.go:130] > ID=ubuntu
	I0124 17:45:31.850345  128080 command_runner.go:130] > ID_LIKE=debian
	I0124 17:45:31.850353  128080 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0124 17:45:31.850360  128080 command_runner.go:130] > VERSION_ID="20.04"
	I0124 17:45:31.850369  128080 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0124 17:45:31.850376  128080 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0124 17:45:31.850388  128080 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0124 17:45:31.850403  128080 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0124 17:45:31.850415  128080 command_runner.go:130] > VERSION_CODENAME=focal
	I0124 17:45:31.850421  128080 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0124 17:45:31.850490  128080 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 17:45:31.850520  128080 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 17:45:31.850538  128080 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 17:45:31.850549  128080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 17:45:31.850562  128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
	I0124 17:45:31.850662  128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
	I0124 17:45:31.850748  128080 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
	I0124 17:45:31.850760  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /etc/ssl/certs/101262.pem
	I0124 17:45:31.850850  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 17:45:31.857267  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
	I0124 17:45:31.874214  128080 start.go:303] post-start completed in 140.646195ms
	I0124 17:45:31.874580  128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561
	I0124 17:45:31.896951  128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
	I0124 17:45:31.897246  128080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 17:45:31.897299  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:31.919718  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:45:32.008735  128080 command_runner.go:130] > 23%
	I0124 17:45:32.008828  128080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 17:45:32.012715  128080 command_runner.go:130] > 227G
	I0124 17:45:32.012738  128080 start.go:128] duration metric: createHost completed in 9.003538767s
	I0124 17:45:32.012746  128080 start.go:83] releasing machines lock for "multinode-585561", held for 9.003658553s
	I0124 17:45:32.012799  128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561
	I0124 17:45:32.034917  128080 ssh_runner.go:195] Run: cat /version.json
	I0124 17:45:32.034963  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:32.034984  128080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0124 17:45:32.035043  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:45:32.058262  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:45:32.058824  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:45:32.147599  128080 command_runner.go:130] > {"iso_version": "v1.28.0-1672850525-15541", "kicbase_version": "v0.0.36-1674164627-15541", "minikube_version": "v1.28.0", "commit": "09f10d7ce80c70492bae8df2b479c8e82a922c68"}
	I0124 17:45:32.147771  128080 ssh_runner.go:195] Run: systemctl --version
	I0124 17:45:32.174980  128080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0124 17:45:32.176573  128080 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0124 17:45:32.176606  128080 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0124 17:45:32.176675  128080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 17:45:32.180451  128080 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0124 17:45:32.180473  128080 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0124 17:45:32.180479  128080 command_runner.go:130] > Device: 34h/52d	Inode: 538245      Links: 1
	I0124 17:45:32.180485  128080 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0124 17:45:32.180491  128080 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0124 17:45:32.180495  128080 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0124 17:45:32.180518  128080 command_runner.go:130] > Change: 2023-01-24 17:29:01.213660493 +0000
	I0124 17:45:32.180529  128080 command_runner.go:130] >  Birth: -
	I0124 17:45:32.180755  128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 17:45:32.200290  128080 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
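
The find/sed pipeline above normalizes the loopback CNI config in place: it injects a "name" field when one is missing and pins "cniVersion" to 1.0.0, which current CNI plugins expect. The same patch done structurally rather than textually, as a sketch (hypothetical helper, not minikube's actual cni.go):

    package provision

    import (
    	"encoding/json"
    	"os"
    )

    // patchLoopbackConf gives the loopback config a name and a 1.0.0
    // cniVersion, mirroring what the sed commands in the log do with text.
    func patchLoopbackConf(path string) error {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	conf := map[string]any{}
    	if err := json.Unmarshal(raw, &conf); err != nil {
    		return err
    	}
    	if _, ok := conf["name"]; !ok {
    		conf["name"] = "loopback"
    	}
    	conf["cniVersion"] = "1.0.0"
    	out, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path, out, 0o644)
    }
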
	I0124 17:45:32.200403  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0124 17:45:32.207385  128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0124 17:45:32.219767  128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0124 17:45:32.235632  128080 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0124 17:45:32.235673  128080 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0124 17:45:32.235688  128080 start.go:472] detecting cgroup driver to use...
	I0124 17:45:32.235720  128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
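
The "cgroupfs" verdict is detected from the host, and it drives both the containerd sed edits that follow and the daemon.json written a little later, so all the runtimes agree on one cgroup driver. One common heuristic for the v1-versus-v2 half of such a detection, sketched here (not necessarily minikube's exact check in detect.go):

    package provision

    import "os"

    // cgroupVersion reports 2 when the unified hierarchy is mounted at
    // /sys/fs/cgroup (cgroup.controllers exists only there on v2), else 1.
    // A v1 host like this runner then typically maps to the "cgroupfs" driver.
    func cgroupVersion() int {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return 2
    	}
    	return 1
    }
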
	I0124 17:45:32.235879  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 17:45:32.247792  128080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0124 17:45:32.247816  128080 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0124 17:45:32.248468  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0124 17:45:32.256045  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 17:45:32.264209  128080 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 17:45:32.264275  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 17:45:32.272067  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 17:45:32.279990  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 17:45:32.287462  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 17:45:32.294927  128080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 17:45:32.302022  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 17:45:32.309785  128080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 17:45:32.316018  128080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0124 17:45:32.316074  128080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 17:45:32.322494  128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:45:32.395373  128080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 17:45:32.476530  128080 start.go:472] detecting cgroup driver to use...
	I0124 17:45:32.476583  128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 17:45:32.476627  128080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 17:45:32.486372  128080 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0124 17:45:32.486396  128080 command_runner.go:130] > [Unit]
	I0124 17:45:32.486407  128080 command_runner.go:130] > Description=Docker Application Container Engine
	I0124 17:45:32.486416  128080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0124 17:45:32.486424  128080 command_runner.go:130] > BindsTo=containerd.service
	I0124 17:45:32.486433  128080 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0124 17:45:32.486441  128080 command_runner.go:130] > Wants=network-online.target
	I0124 17:45:32.486452  128080 command_runner.go:130] > Requires=docker.socket
	I0124 17:45:32.486459  128080 command_runner.go:130] > StartLimitBurst=3
	I0124 17:45:32.486477  128080 command_runner.go:130] > StartLimitIntervalSec=60
	I0124 17:45:32.486486  128080 command_runner.go:130] > [Service]
	I0124 17:45:32.486493  128080 command_runner.go:130] > Type=notify
	I0124 17:45:32.486503  128080 command_runner.go:130] > Restart=on-failure
	I0124 17:45:32.486521  128080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0124 17:45:32.486536  128080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0124 17:45:32.486549  128080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0124 17:45:32.486564  128080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0124 17:45:32.486574  128080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0124 17:45:32.486587  128080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0124 17:45:32.486602  128080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0124 17:45:32.486619  128080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0124 17:45:32.486633  128080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0124 17:45:32.486643  128080 command_runner.go:130] > ExecStart=
	I0124 17:45:32.486670  128080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0124 17:45:32.486706  128080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0124 17:45:32.486718  128080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0124 17:45:32.486732  128080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0124 17:45:32.486742  128080 command_runner.go:130] > LimitNOFILE=infinity
	I0124 17:45:32.486749  128080 command_runner.go:130] > LimitNPROC=infinity
	I0124 17:45:32.486759  128080 command_runner.go:130] > LimitCORE=infinity
	I0124 17:45:32.486768  128080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0124 17:45:32.486779  128080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0124 17:45:32.486786  128080 command_runner.go:130] > TasksMax=infinity
	I0124 17:45:32.486793  128080 command_runner.go:130] > TimeoutStartSec=0
	I0124 17:45:32.486800  128080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0124 17:45:32.486809  128080 command_runner.go:130] > Delegate=yes
	I0124 17:45:32.486819  128080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0124 17:45:32.486829  128080 command_runner.go:130] > KillMode=process
	I0124 17:45:32.486840  128080 command_runner.go:130] > [Install]
	I0124 17:45:32.486850  128080 command_runner.go:130] > WantedBy=multi-user.target
	I0124 17:45:32.487238  128080 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 17:45:32.487297  128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 17:45:32.497393  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 17:45:32.509897  128080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0124 17:45:32.509922  128080 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0124 17:45:32.510780  128080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 17:45:32.590227  128080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 17:45:32.681957  128080 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 17:45:32.681987  128080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 17:45:32.695875  128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:45:32.778228  128080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 17:45:32.974504  128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 17:45:33.052236  128080 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0124 17:45:33.052321  128080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 17:45:33.123624  128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 17:45:33.196557  128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:45:33.273352  128080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 17:45:33.284600  128080 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 17:45:33.284652  128080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 17:45:33.287650  128080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0124 17:45:33.287672  128080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0124 17:45:33.287681  128080 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0124 17:45:33.287693  128080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0124 17:45:33.287707  128080 command_runner.go:130] > Access: 2023-01-24 17:45:33.280295086 +0000
	I0124 17:45:33.287711  128080 command_runner.go:130] > Modify: 2023-01-24 17:45:33.280295086 +0000
	I0124 17:45:33.287716  128080 command_runner.go:130] > Change: 2023-01-24 17:45:33.280295086 +0000
	I0124 17:45:33.287723  128080 command_runner.go:130] >  Birth: -
	I0124 17:45:33.287734  128080 start.go:540] Will wait 60s for crictl version
	I0124 17:45:33.287779  128080 ssh_runner.go:195] Run: which crictl
	I0124 17:45:33.290385  128080 command_runner.go:130] > /usr/bin/crictl
	I0124 17:45:33.290428  128080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 17:45:33.380043  128080 command_runner.go:130] > Version:  0.1.0
	I0124 17:45:33.380067  128080 command_runner.go:130] > RuntimeName:  docker
	I0124 17:45:33.380076  128080 command_runner.go:130] > RuntimeVersion:  20.10.22
	I0124 17:45:33.380084  128080 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0124 17:45:33.381625  128080 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 17:45:33.381688  128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 17:45:33.408137  128080 command_runner.go:130] > 20.10.22
	I0124 17:45:33.408211  128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 17:45:33.435476  128080 command_runner.go:130] > 20.10.22
	I0124 17:45:33.438291  128080 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 17:45:33.438356  128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 17:45:33.461481  128080 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0124 17:45:33.464760  128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 17:45:33.474118  128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:45:33.474185  128080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 17:45:33.496239  128080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0124 17:45:33.496266  128080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0124 17:45:33.496275  128080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0124 17:45:33.496284  128080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0124 17:45:33.496292  128080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.4
	I0124 17:45:33.496300  128080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 17:45:33.496309  128080 command_runner.go:130] > registry.k8s.io/etcd:v3.3.8-0-gke.1
	I0124 17:45:33.496316  128080 command_runner.go:130] > registry.k8s.io/pause:test2
	I0124 17:45:33.496349  128080 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/coredns/coredns:v1.9.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 17:45:33.496361  128080 docker.go:636] registry.k8s.io/pause:3.9 wasn't preloaded
	I0124 17:45:33.496395  128080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0124 17:45:33.503133  128080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.4":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d","registry.k8s.io/coredns/coredns@sha256:b82e294de6be763f73ae71266c8f5466e7e03c69f3a1de96efd570284d35bb18":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:v3.3.8-0-gke.1":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b","registry.k8s.io/etcd@sha256:786ab1b91730b4171748511553abebaf73df1b5e8f1283d4bb5561728ae47fd5":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:test2":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","registry.k8s.io/pause@sha256:0c17b6b35fafb2de159db2af2c0e40a4c1aa1a210bac1b65fbf807f105899146":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06"}}}
	I0124 17:45:33.503296  128080 ssh_runner.go:195] Run: which lz4
	I0124 17:45:33.506024  128080 command_runner.go:130] > /usr/bin/lz4
	I0124 17:45:33.506065  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0124 17:45:33.506137  128080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0124 17:45:33.508774  128080 command_runner.go:130] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0124 17:45:33.508914  128080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0124 17:45:33.508943  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (441986565 bytes)
	I0124 17:45:34.228914  128080 docker.go:594] Took 0.722810 seconds to copy over tarball
	I0124 17:45:34.228974  128080 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0124 17:45:36.426644  128080 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197646751s)
	I0124 17:45:36.426668  128080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0124 17:45:36.487459  128080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0124 17:45:36.494297  128080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.4":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d","registry.k8s.io/coredns/coredns@sha256:b82e294de6be763f73ae71266c8f5466e7e03c69f3a1de96efd570284d35bb18":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:v3.3.8-0-gke.1":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b","registry.k8s.io/etcd@sha256:786ab1b91730b4171748511553abebaf73df1b5e8f1283d4bb5561728ae47fd5":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:test2":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","registry.k8s.io/pause@sha256:0c17b6b35fafb2de159db2af2c0e40a4c1aa1a210bac1b65fbf807f105899146":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06"}}}
	I0124 17:45:36.494473  128080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2637 bytes)
	I0124 17:45:36.507161  128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:45:36.582112  128080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 17:45:39.494099  128080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.911944421s)
	I0124 17:45:39.494233  128080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 17:45:39.518359  128080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0124 17:45:39.518379  128080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0124 17:45:39.518384  128080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0124 17:45:39.518389  128080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0124 17:45:39.518394  128080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.4
	I0124 17:45:39.518399  128080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 17:45:39.518404  128080 command_runner.go:130] > registry.k8s.io/etcd:v3.3.8-0-gke.1
	I0124 17:45:39.518408  128080 command_runner.go:130] > registry.k8s.io/pause:test2
	I0124 17:45:39.518440  128080 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/coredns/coredns:v1.9.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 17:45:39.518450  128080 docker.go:636] registry.k8s.io/pause:3.9 wasn't preloaded
	I0124 17:45:39.518461  128080 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.26.1 registry.k8s.io/kube-controller-manager:v1.26.1 registry.k8s.io/kube-scheduler:v1.26.1 registry.k8s.io/kube-proxy:v1.26.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.6-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0124 17:45:39.520083  128080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.26.1
	I0124 17:45:39.520153  128080 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0124 17:45:39.520300  128080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 17:45:39.520380  128080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.26.1
	I0124 17:45:39.520398  128080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.26.1
	I0124 17:45:39.520428  128080 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:39.520383  128080 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:39.520477  128080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.26.1
	I0124 17:45:39.521147  128080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.26.1: Error: No such image: registry.k8s.io/kube-controller-manager:v1.26.1
	I0124 17:45:39.521176  128080 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error: No such image: registry.k8s.io/pause:3.9
	I0124 17:45:39.521211  128080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.26.1: Error: No such image: registry.k8s.io/kube-apiserver:v1.26.1
	I0124 17:45:39.521227  128080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.26.1: Error: No such image: registry.k8s.io/kube-scheduler:v1.26.1
	I0124 17:45:39.521280  128080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 17:45:39.521340  128080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.26.1: Error: No such image: registry.k8s.io/kube-proxy:v1.26.1
	I0124 17:45:39.521350  128080 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:39.521975  128080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.6-0: Error: No such image: registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:39.670545  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:39.670545  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.26.1
	I0124 17:45:39.675288  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:39.678647  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.26.1
	I0124 17:45:39.680658  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.26.1
	I0124 17:45:39.685067  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0124 17:45:39.697698  128080 command_runner.go:130] > sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
	I0124 17:45:39.702908  128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:39.703037  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.26.1
	I0124 17:45:39.705410  128080 cache_images.go:116] "registry.k8s.io/etcd:3.5.6-0" needs transfer: "registry.k8s.io/etcd:3.5.6-0" does not exist at hash "fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7" in container runtime
	I0124 17:45:39.705464  128080 docker.go:306] Removing image: registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:39.705501  128080 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:39.709780  128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:39.709836  128080 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I0124 17:45:39.709877  128080 docker.go:306] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:39.709921  128080 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:39.738381  128080 command_runner.go:130] > sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
	I0124 17:45:39.744815  128080 command_runner.go:130] > sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
	I0124 17:45:39.746542  128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/pause:3.9
	I0124 17:45:39.746614  128080 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0124 17:45:39.746651  128080 docker.go:306] Removing image: registry.k8s.io/pause:3.9
	I0124 17:45:39.746694  128080 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.9
	I0124 17:45:39.755440  128080 command_runner.go:130] > sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
	I0124 17:45:39.756584  128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:39.756646  128080 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0
	I0124 17:45:39.756670  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 -> /var/lib/minikube/images/etcd_3.5.6-0
	I0124 17:45:39.756747  128080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.6-0
	I0124 17:45:39.760357  128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:39.760403  128080 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0124 17:45:39.760428  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 -> /var/lib/minikube/images/coredns_v1.9.3
	I0124 17:45:39.760485  128080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3
	I0124 17:45:39.770551  128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/pause:3.9
	I0124 17:45:39.770610  128080 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0124 17:45:39.770635  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 -> /var/lib/minikube/images/pause_3.9
	I0124 17:45:39.770636  128080 command_runner.go:130] ! stat: cannot stat '/var/lib/minikube/images/etcd_3.5.6-0': No such file or directory
	I0124 17:45:39.770668  128080 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.6-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.6-0': No such file or directory
	I0124 17:45:39.770691  128080 command_runner.go:130] ! stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0124 17:45:39.770699  128080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I0124 17:45:39.770695  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 --> /var/lib/minikube/images/etcd_3.5.6-0 (102545408 bytes)
	I0124 17:45:39.770731  128080 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0124 17:45:39.770747  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I0124 17:45:39.775700  128080 command_runner.go:130] ! stat: cannot stat '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0124 17:45:39.776088  128080 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0124 17:45:39.776112  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0124 17:45:39.852449  128080 docker.go:273] Loading image: /var/lib/minikube/images/pause_3.9
	I0124 17:45:39.852485  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.9 | docker load"
	I0124 17:45:40.080634  128080 command_runner.go:130] > Loaded image: registry.k8s.io/pause:3.9
	I0124 17:45:40.081345  128080 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0124 17:45:40.081383  128080 docker.go:273] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I0124 17:45:40.081405  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.9.3 | docker load"
	I0124 17:45:40.205386  128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 17:45:40.708141  128080 command_runner.go:130] > Loaded image: registry.k8s.io/coredns/coredns:v1.9.3
	I0124 17:45:40.711557  128080 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I0124 17:45:40.711591  128080 docker.go:273] Loading image: /var/lib/minikube/images/etcd_3.5.6-0
	I0124 17:45:40.711605  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load"
	I0124 17:45:40.711601  128080 command_runner.go:130] > sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
	I0124 17:45:43.685698  128080 command_runner.go:130] > Loaded image: registry.k8s.io/etcd:3.5.6-0
	I0124 17:45:43.699960  128080 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load": (2.988329653s)
	I0124 17:45:43.699988  128080 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 from cache
	I0124 17:45:43.700012  128080 cache_images.go:123] Successfully loaded all cached images
	I0124 17:45:43.700016  128080 cache_images.go:92] LoadImages completed in 4.181537607s
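	[editor's note] The sequence above is the generic transfer-and-load path: scp the cached tarball into /var/lib/minikube/images, then stream it into the daemon. A hand-run sketch of the same steps for one image; the node alias and local cache path are illustrative:
	# Copy a cached image tarball to the node, load it, then confirm the tag exists.
	scp ~/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 node:/var/lib/minikube/images/pause_3.9
	ssh node 'sudo cat /var/lib/minikube/images/pause_3.9 | docker load'
	ssh node 'docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9'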
	I0124 17:45:43.700071  128080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 17:45:43.766188  128080 command_runner.go:130] > cgroupfs
	I0124 17:45:43.766245  128080 cni.go:84] Creating CNI manager for ""
	I0124 17:45:43.766255  128080 cni.go:136] 1 nodes found, recommending kindnet
	I0124 17:45:43.766264  128080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 17:45:43.766287  128080 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 17:45:43.766439  128080 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-585561"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 17:45:43.766514  128080 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
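	[editor's note] The drop-in above relies on the standard systemd override idiom: the bare ExecStart= clears the unit's packaged command line before the replacement is set. A sketch of writing such a drop-in by hand; the kubelet flags are abbreviated from the full command line shown above:
	# Create the override, then reload units and restart kubelet.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet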
	I0124 17:45:43.766560  128080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0124 17:45:43.773089  128080 command_runner.go:130] > kubeadm
	I0124 17:45:43.773108  128080 command_runner.go:130] > kubectl
	I0124 17:45:43.773113  128080 command_runner.go:130] > kubelet
	I0124 17:45:43.773616  128080 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 17:45:43.773667  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 17:45:43.780165  128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0124 17:45:43.792314  128080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 17:45:43.804533  128080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0124 17:45:43.817082  128080 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0124 17:45:43.819995  128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
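	[editor's note] Expanded for readability, that one-liner is an idempotent replace-or-append of the control-plane host entry (same commands, reformatted; /tmp/h.$$ is the shell-PID temp file from the command above):
	# Drop any stale entry, append the current one, then install the result via sudo cp.
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.58.2	control-plane.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts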
	I0124 17:45:43.829201  128080 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.2
	I0124 17:45:43.829240  128080 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:45:43.829371  128080 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
	I0124 17:45:43.829405  128080 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
	I0124 17:45:43.829442  128080 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key
	I0124 17:45:43.829455  128080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt with IP's: []
	I0124 17:45:44.010311  128080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt ...
	I0124 17:45:44.010345  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt: {Name:mk2d37d083f05e3e37e8965ee60d661367fc2e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:45:44.010529  128080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key ...
	I0124 17:45:44.010542  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key: {Name:mka1ee5a4e8e936aba7297183fb01c3e0d44b829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:45:44.010612  128080 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041
	I0124 17:45:44.010628  128080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0124 17:45:44.233695  128080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041 ...
	I0124 17:45:44.233735  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041: {Name:mke4bb023677d665ceec185667069cfc8848a1d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:45:44.233896  128080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041 ...
	I0124 17:45:44.233907  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041: {Name:mkfcc331e089aeb8ebcd481d3bfe9c073ca672c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:45:44.233990  128080 certs.go:333] copying /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt
	I0124 17:45:44.234067  128080 certs.go:337] copying /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key
	I0124 17:45:44.234118  128080 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key
	I0124 17:45:44.234132  128080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt with IP's: []
	I0124 17:45:44.414136  128080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt ...
	I0124 17:45:44.414168  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt: {Name:mk42f04a02210acb9676eb8aa41efbf8f98dd76c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:45:44.414340  128080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key ...
	I0124 17:45:44.414351  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key: {Name:mkc04d12dc5e84768be7380c00c4420689c3c21f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
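	[editor's note] The crypto.go steps here are ordinary CA-signed certificate issuance against the shared minikubeCA. A rough openssl equivalent for one client certificate; file names and the subject are illustrative, not minikube's actual code path:
	# Issue a client cert signed by an existing CA key pair (ca.crt/ca.key assumed present).
	openssl req -new -newkey rsa:2048 -nodes -keyout client.key \
	  -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out client.crt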
	I0124 17:45:44.414421  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0124 17:45:44.414435  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0124 17:45:44.414443  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0124 17:45:44.414454  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0124 17:45:44.414463  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0124 17:45:44.414474  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0124 17:45:44.414482  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0124 17:45:44.414495  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0124 17:45:44.414542  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
	W0124 17:45:44.414574  128080 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
	I0124 17:45:44.414582  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
	I0124 17:45:44.414602  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
	I0124 17:45:44.414623  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
	I0124 17:45:44.414651  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
	I0124 17:45:44.414692  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
	I0124 17:45:44.414719  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /usr/share/ca-certificates/101262.pem
	I0124 17:45:44.414737  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:45:44.414747  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem -> /usr/share/ca-certificates/10126.pem
	I0124 17:45:44.415262  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 17:45:44.433928  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0124 17:45:44.451134  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 17:45:44.468595  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0124 17:45:44.485892  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 17:45:44.502849  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0124 17:45:44.519562  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 17:45:44.536175  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 17:45:44.553058  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
	I0124 17:45:44.570471  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 17:45:44.587228  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
	I0124 17:45:44.604268  128080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 17:45:44.616325  128080 ssh_runner.go:195] Run: openssl version
	I0124 17:45:44.620913  128080 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0124 17:45:44.621093  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 17:45:44.628037  128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:45:44.631038  128080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:45:44.631068  128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:45:44.631099  128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:45:44.635478  128080 command_runner.go:130] > b5213941
	I0124 17:45:44.635585  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 17:45:44.642526  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
	I0124 17:45:44.649442  128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
	I0124 17:45:44.652244  128080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
	I0124 17:45:44.652293  128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
	I0124 17:45:44.652338  128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
	I0124 17:45:44.656688  128080 command_runner.go:130] > 51391683
	I0124 17:45:44.656859  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
	I0124 17:45:44.663766  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
	I0124 17:45:44.670660  128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
	I0124 17:45:44.673361  128080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
	I0124 17:45:44.673430  128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
	I0124 17:45:44.673469  128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
	I0124 17:45:44.677678  128080 command_runner.go:130] > 3ec20f2e
	I0124 17:45:44.677802  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
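	[editor's note] These hash/symlink steps follow OpenSSL's c_rehash convention: a trust-store certificate is located through a symlink named <subject-hash>.0. A standalone sketch for one certificate, matching the b5213941 hash printed above:
	# Compute the subject hash and create the lookup symlink OpenSSL expects.
	cert=/etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"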
	I0124 17:45:44.684540  128080 kubeadm.go:401] StartCluster: {Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:45:44.684650  128080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 17:45:44.705063  128080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 17:45:44.711927  128080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0124 17:45:44.711955  128080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0124 17:45:44.711964  128080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0124 17:45:44.712018  128080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 17:45:44.718747  128080 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 17:45:44.718796  128080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 17:45:44.725397  128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0124 17:45:44.725416  128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0124 17:45:44.725423  128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0124 17:45:44.725431  128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 17:45:44.725456  128080 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 17:45:44.725483  128080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 17:45:44.769803  128080 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0124 17:45:44.769829  128080 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0124 17:45:44.769903  128080 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 17:45:44.769917  128080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0124 17:45:44.802975  128080 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0124 17:45:44.803004  128080 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0124 17:45:44.803098  128080 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1027-gcp
	I0124 17:45:44.803126  128080 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0124 17:45:44.803181  128080 kubeadm.go:322] OS: Linux
	I0124 17:45:44.803192  128080 command_runner.go:130] > OS: Linux
	I0124 17:45:44.803249  128080 kubeadm.go:322] CGROUPS_CPU: enabled
	I0124 17:45:44.803260  128080 command_runner.go:130] > CGROUPS_CPU: enabled
	I0124 17:45:44.803320  128080 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0124 17:45:44.803340  128080 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0124 17:45:44.803422  128080 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0124 17:45:44.803434  128080 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0124 17:45:44.803508  128080 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0124 17:45:44.803523  128080 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0124 17:45:44.803591  128080 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0124 17:45:44.803602  128080 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0124 17:45:44.803654  128080 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0124 17:45:44.803663  128080 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0124 17:45:44.803719  128080 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0124 17:45:44.803728  128080 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0124 17:45:44.803776  128080 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0124 17:45:44.803786  128080 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0124 17:45:44.803833  128080 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0124 17:45:44.803842  128080 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0124 17:45:44.867714  128080 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 17:45:44.867737  128080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 17:45:44.867845  128080 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 17:45:44.867869  128080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 17:45:44.867986  128080 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 17:45:44.867997  128080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 17:45:44.993524  128080 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 17:45:44.993567  128080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 17:45:44.997564  128080 out.go:204]   - Generating certificates and keys ...
	I0124 17:45:44.997635  128080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0124 17:45:44.997676  128080 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 17:45:44.997786  128080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0124 17:45:44.997807  128080 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 17:45:45.187295  128080 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0124 17:45:45.187328  128080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0124 17:45:45.283882  128080 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0124 17:45:45.283911  128080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0124 17:45:45.391350  128080 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0124 17:45:45.391374  128080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0124 17:45:45.606365  128080 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0124 17:45:45.606387  128080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0124 17:45:45.702268  128080 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0124 17:45:45.702297  128080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0124 17:45:45.702438  128080 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0124 17:45:45.702456  128080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0124 17:45:45.882805  128080 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0124 17:45:45.882832  128080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0124 17:45:45.882989  128080 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0124 17:45:45.883005  128080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0124 17:45:46.093046  128080 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0124 17:45:46.093078  128080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0124 17:45:46.159501  128080 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0124 17:45:46.159524  128080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0124 17:45:46.301027  128080 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0124 17:45:46.301055  128080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0124 17:45:46.301189  128080 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 17:45:46.301210  128080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 17:45:46.495395  128080 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 17:45:46.495441  128080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 17:45:46.652582  128080 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 17:45:46.652632  128080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 17:45:46.946653  128080 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 17:45:46.946697  128080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 17:45:47.133062  128080 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 17:45:47.133093  128080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 17:45:47.145766  128080 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 17:45:47.145792  128080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 17:45:47.146624  128080 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 17:45:47.146649  128080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 17:45:47.146695  128080 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0124 17:45:47.146723  128080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0124 17:45:47.229918  128080 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 17:45:47.229945  128080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 17:45:47.233377  128080 out.go:204]   - Booting up control plane ...
	I0124 17:45:47.233488  128080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 17:45:47.233513  128080 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 17:45:47.233641  128080 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 17:45:47.233657  128080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 17:45:47.234372  128080 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 17:45:47.234389  128080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 17:45:47.235031  128080 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 17:45:47.235048  128080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 17:45:47.236750  128080 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 17:45:47.236771  128080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 17:45:56.738655  128080 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.501901 seconds
	I0124 17:45:56.738717  128080 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.501901 seconds
	I0124 17:45:56.738924  128080 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0124 17:45:56.738947  128080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0124 17:45:56.751647  128080 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0124 17:45:56.751682  128080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0124 17:45:57.269253  128080 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0124 17:45:57.269281  128080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0124 17:45:57.269510  128080 kubeadm.go:322] [mark-control-plane] Marking the node multinode-585561 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0124 17:45:57.269568  128080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-585561 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0124 17:45:57.778295  128080 kubeadm.go:322] [bootstrap-token] Using token: klbn66.lbpub6z14ok3qnmd
	I0124 17:45:57.780154  128080 out.go:204]   - Configuring RBAC rules ...
	I0124 17:45:57.778369  128080 command_runner.go:130] > [bootstrap-token] Using token: klbn66.lbpub6z14ok3qnmd
	I0124 17:45:57.780311  128080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0124 17:45:57.780333  128080 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0124 17:45:57.783421  128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0124 17:45:57.783438  128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0124 17:45:57.789469  128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0124 17:45:57.789493  128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0124 17:45:57.791912  128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0124 17:45:57.791938  128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0124 17:45:57.795719  128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0124 17:45:57.795747  128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0124 17:45:57.798166  128080 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0124 17:45:57.798189  128080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0124 17:45:57.807852  128080 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0124 17:45:57.807927  128080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0124 17:45:58.002344  128080 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0124 17:45:58.002373  128080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0124 17:45:58.187864  128080 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0124 17:45:58.187890  128080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0124 17:45:58.189197  128080 kubeadm.go:322] 
	I0124 17:45:58.189319  128080 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0124 17:45:58.189336  128080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0124 17:45:58.189348  128080 kubeadm.go:322] 
	I0124 17:45:58.189467  128080 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0124 17:45:58.189495  128080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0124 17:45:58.189509  128080 kubeadm.go:322] 
	I0124 17:45:58.189551  128080 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0124 17:45:58.189561  128080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0124 17:45:58.189635  128080 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0124 17:45:58.189646  128080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0124 17:45:58.189731  128080 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0124 17:45:58.189747  128080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0124 17:45:58.189753  128080 kubeadm.go:322] 
	I0124 17:45:58.189836  128080 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0124 17:45:58.189847  128080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0124 17:45:58.189852  128080 kubeadm.go:322] 
	I0124 17:45:58.189916  128080 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0124 17:45:58.189926  128080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0124 17:45:58.189930  128080 kubeadm.go:322] 
	I0124 17:45:58.190004  128080 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0124 17:45:58.190015  128080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0124 17:45:58.190118  128080 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0124 17:45:58.190135  128080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0124 17:45:58.190221  128080 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0124 17:45:58.190228  128080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0124 17:45:58.190233  128080 kubeadm.go:322] 
	I0124 17:45:58.190361  128080 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0124 17:45:58.190374  128080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0124 17:45:58.190484  128080 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0124 17:45:58.190496  128080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0124 17:45:58.190501  128080 kubeadm.go:322] 
	I0124 17:45:58.190602  128080 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
	I0124 17:45:58.190613  128080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
	I0124 17:45:58.190738  128080 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 \
	I0124 17:45:58.190750  128080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 \
	I0124 17:45:58.190776  128080 kubeadm.go:322] 	--control-plane 
	I0124 17:45:58.190782  128080 command_runner.go:130] > 	--control-plane 
	I0124 17:45:58.190793  128080 kubeadm.go:322] 
	I0124 17:45:58.190902  128080 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0124 17:45:58.190912  128080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0124 17:45:58.190918  128080 kubeadm.go:322] 
	I0124 17:45:58.191017  128080 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
	I0124 17:45:58.191028  128080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
	I0124 17:45:58.191152  128080 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 
	I0124 17:45:58.191161  128080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 
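	[editor's note] The --discovery-token-ca-cert-hash value pins the cluster CA for joining nodes; per the kubeadm documentation it is the SHA-256 of the CA certificate's DER-encoded public key and can be recomputed on the control plane. A sketch, with the cert path taken from the certificatesDir in the config above:
	# Recompute the discovery hash from the cluster CA certificate.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'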
	I0124 17:45:58.237280  128080 kubeadm.go:322] W0124 17:45:44.762284    1917 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0124 17:45:58.237309  128080 command_runner.go:130] ! W0124 17:45:44.762284    1917 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0124 17:45:58.237600  128080 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0124 17:45:58.237636  128080 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0124 17:45:58.237795  128080 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 17:45:58.237810  128080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 17:45:58.237829  128080 cni.go:84] Creating CNI manager for ""
	I0124 17:45:58.237852  128080 cni.go:136] 1 nodes found, recommending kindnet
	I0124 17:45:58.239768  128080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0124 17:45:58.241228  128080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0124 17:45:58.245464  128080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0124 17:45:58.245490  128080 command_runner.go:130] >   Size: 2828728   	Blocks: 5536       IO Block: 4096   regular file
	I0124 17:45:58.245501  128080 command_runner.go:130] > Device: 34h/52d	Inode: 535835      Links: 1
	I0124 17:45:58.245512  128080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0124 17:45:58.245526  128080 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0124 17:45:58.245535  128080 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0124 17:45:58.245542  128080 command_runner.go:130] > Change: 2023-01-24 17:29:00.473607792 +0000
	I0124 17:45:58.245548  128080 command_runner.go:130] >  Birth: -
	I0124 17:45:58.245604  128080 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0124 17:45:58.245613  128080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0124 17:45:58.260810  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0124 17:45:58.922634  128080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0124 17:45:58.929361  128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0124 17:45:58.935073  128080 command_runner.go:130] > serviceaccount/kindnet created
	I0124 17:45:58.943119  128080 command_runner.go:130] > daemonset.apps/kindnet created
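The CNI step above amounts to copying a generated kindnet manifest into the node (`scp memory --> /var/tmp/minikube/cni.yaml`) and applying it with the Kubernetes-versioned kubectl. A minimal sketch of that apply step, using the exact paths from the log (running it as a standalone program is an assumption for illustration):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command: kubectl apply with the in-node kubeconfig
		// against the manifest previously scp'd to /var/tmp/minikube/cni.yaml.
		out, err := exec.Command(
			"sudo", "/var/lib/minikube/binaries/v1.26.1/kubectl",
			"apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out) // e.g. "daemonset.apps/kindnet created"
	}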
	I0124 17:45:58.946499  128080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0124 17:45:58.946574  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:45:58.946595  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=6b2c057f52b907b52814c670e5ac26b018123ade minikube.k8s.io/name=multinode-585561 minikube.k8s.io/updated_at=2023_01_24T17_45_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:45:59.039306  128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0124 17:45:59.043121  128080 command_runner.go:130] > -16
	I0124 17:45:59.043156  128080 ops.go:34] apiserver oom_adj: -16
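The oom_adj probe above is a one-liner: resolve the kube-apiserver pid and read `/proc/<pid>/oom_adj`, expecting the -16 that shields the apiserver from the OOM killer. A stdlib-only equivalent of the logged bash command (a sketch with simplified error handling):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep resolves the kube-apiserver pid, as in the logged one-liner;
		// take the first pid if several match.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("kube-apiserver not running:", err)
			return
		}
		pid := strings.Fields(string(out))[0]
		raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw))) // -16 above
	}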
	I0124 17:45:59.043251  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:45:59.054574  128080 command_runner.go:130] > node/multinode-585561 labeled
	I0124 17:45:59.106915  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:45:59.607690  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:45:59.673704  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:00.107350  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:00.171767  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:00.607314  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:00.671479  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:01.108138  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:01.172113  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:01.607749  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:01.668888  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:02.107875  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:02.168102  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:02.607325  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:02.668589  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:03.108046  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:03.168986  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:03.608067  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:03.670289  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:04.107967  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:04.168901  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:04.607630  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:04.671033  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:05.107652  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:05.168730  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:05.607895  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:05.670809  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:06.107366  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:06.174207  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:06.607862  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:06.670424  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:07.108081  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:07.169397  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:07.607437  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:07.668571  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:08.107958  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:08.172977  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:08.607445  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:08.745915  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:09.107627  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:09.173030  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:09.607625  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:09.671485  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:10.108053  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:10.175202  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:10.607931  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:10.677701  128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0124 17:46:11.107282  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0124 17:46:11.237804  128080 command_runner.go:130] > NAME      SECRETS   AGE
	I0124 17:46:11.237827  128080 command_runner.go:130] > default   0         1s
	I0124 17:46:11.241506  128080 kubeadm.go:1073] duration metric: took 12.294989418s to wait for elevateKubeSystemPrivileges.
	I0124 17:46:11.241541  128080 kubeadm.go:403] StartCluster complete in 26.557007789s
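The wall of NotFound lines above is expected: `kubectl get sa default` fails until the controller-manager's serviceaccount controller creates the "default" ServiceAccount, and minikube simply re-runs the command every half second (12.29s total here). The same wait expressed directly against the API with client-go might look like this; the kubeconfig path is taken from the log, everything else is a sketch rather than minikube's actual loop, which shells out to kubectl:

	package main

	import (
		"context"
		"fmt"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Retry every 500ms until the serviceaccount controller has created
		// "default"; any error other than NotFound is a real failure.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(
				context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default serviceaccount is ready")
				return
			}
			if !apierrors.IsNotFound(err) {
				panic(err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default serviceaccount")
	}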
	I0124 17:46:11.241565  128080 settings.go:142] acquiring lock: {Name:mkad36df43ddb11f4b3b585fb658d2ead0b2161f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:46:11.241636  128080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:46:11.242500  128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/kubeconfig: {Name:mk90224603185dd0b148bed729b1c974f808bca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:46:11.243056  128080 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:46:11.243366  128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 17:46:11.244092  128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0124 17:46:11.244103  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:11.244113  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:11.244122  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:11.244357  128080 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0124 17:46:11.244379  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0124 17:46:11.244418  128080 addons.go:65] Setting storage-provisioner=true in profile "multinode-585561"
	I0124 17:46:11.244434  128080 addons.go:227] Setting addon storage-provisioner=true in "multinode-585561"
	W0124 17:46:11.244441  128080 addons.go:236] addon storage-provisioner should already be in state true
	I0124 17:46:11.244469  128080 cert_rotation.go:137] Starting client certificate rotation controller
	I0124 17:46:11.244493  128080 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:46:11.244670  128080 addons.go:65] Setting default-storageclass=true in profile "multinode-585561"
	I0124 17:46:11.244686  128080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-585561"
	I0124 17:46:11.244761  128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:46:11.244957  128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:46:11.245001  128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:46:11.256720  128080 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0124 17:46:11.256748  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:11.256759  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:11.256768  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:11.256777  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:11.256785  128080 round_trippers.go:580]     Content-Length: 291
	I0124 17:46:11.256794  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:11 GMT
	I0124 17:46:11.256803  128080 round_trippers.go:580]     Audit-Id: 5cdaa95e-5c9a-406c-8dda-598965a63aeb
	I0124 17:46:11.256810  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:11.256840  128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"350","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0124 17:46:11.257342  128080 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"350","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0124 17:46:11.257395  128080 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0124 17:46:11.257401  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:11.257412  128080 round_trippers.go:473]     Content-Type: application/json
	I0124 17:46:11.257422  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:11.257429  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:11.265014  128080 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0124 17:46:11.265043  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:11.265054  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:11 GMT
	I0124 17:46:11.265070  128080 round_trippers.go:580]     Audit-Id: f524da67-f4f6-4219-864c-9bef6dbb6092
	I0124 17:46:11.265082  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:11.265096  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:11.265108  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:11.265121  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:11.265134  128080 round_trippers.go:580]     Content-Length: 291
	I0124 17:46:11.265164  128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"353","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0124 17:46:11.276945  128080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 17:46:11.279090  128080 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 17:46:11.279120  128080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0124 17:46:11.279192  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:46:11.284897  128080 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:46:11.285238  128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 17:46:11.285653  128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0124 17:46:11.285665  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:11.285676  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:11.285685  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:11.307656  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:46:11.338184  128080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0124 17:46:11.338214  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:11.338226  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:11 GMT
	I0124 17:46:11.338234  128080 round_trippers.go:580]     Audit-Id: bf437a79-0171-4414-adb0-e7ee7e8c06e6
	I0124 17:46:11.338241  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:11.338249  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:11.338260  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:11.338268  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:11.338276  128080 round_trippers.go:580]     Content-Length: 109
	I0124 17:46:11.338306  128080 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"357"},"items":[]}
	I0124 17:46:11.338660  128080 addons.go:227] Setting addon default-storageclass=true in "multinode-585561"
	W0124 17:46:11.338690  128080 addons.go:236] addon default-storageclass should already be in state true
	I0124 17:46:11.338721  128080 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:46:11.339224  128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:46:11.369188  128080 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0124 17:46:11.369210  128080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0124 17:46:11.369276  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:46:11.398805  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:46:11.445597  128080 command_runner.go:130] > apiVersion: v1
	I0124 17:46:11.445622  128080 command_runner.go:130] > data:
	I0124 17:46:11.445629  128080 command_runner.go:130] >   Corefile: |
	I0124 17:46:11.445635  128080 command_runner.go:130] >     .:53 {
	I0124 17:46:11.445641  128080 command_runner.go:130] >         errors
	I0124 17:46:11.445649  128080 command_runner.go:130] >         health {
	I0124 17:46:11.445655  128080 command_runner.go:130] >            lameduck 5s
	I0124 17:46:11.445662  128080 command_runner.go:130] >         }
	I0124 17:46:11.445668  128080 command_runner.go:130] >         ready
	I0124 17:46:11.445681  128080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0124 17:46:11.445691  128080 command_runner.go:130] >            pods insecure
	I0124 17:46:11.445700  128080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0124 17:46:11.445711  128080 command_runner.go:130] >            ttl 30
	I0124 17:46:11.445719  128080 command_runner.go:130] >         }
	I0124 17:46:11.445726  128080 command_runner.go:130] >         prometheus :9153
	I0124 17:46:11.445733  128080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0124 17:46:11.445745  128080 command_runner.go:130] >            max_concurrent 1000
	I0124 17:46:11.445754  128080 command_runner.go:130] >         }
	I0124 17:46:11.445761  128080 command_runner.go:130] >         cache 30
	I0124 17:46:11.445776  128080 command_runner.go:130] >         loop
	I0124 17:46:11.445786  128080 command_runner.go:130] >         reload
	I0124 17:46:11.445792  128080 command_runner.go:130] >         loadbalance
	I0124 17:46:11.445800  128080 command_runner.go:130] >     }
	I0124 17:46:11.445809  128080 command_runner.go:130] > kind: ConfigMap
	I0124 17:46:11.445819  128080 command_runner.go:130] > metadata:
	I0124 17:46:11.445830  128080 command_runner.go:130] >   creationTimestamp: "2023-01-24T17:45:57Z"
	I0124 17:46:11.445839  128080 command_runner.go:130] >   name: coredns
	I0124 17:46:11.445852  128080 command_runner.go:130] >   namespace: kube-system
	I0124 17:46:11.445862  128080 command_runner.go:130] >   resourceVersion: "234"
	I0124 17:46:11.445870  128080 command_runner.go:130] >   uid: 6a251b5a-c4e7-4c33-ac27-89bc13f50707
	I0124 17:46:11.449387  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0124 17:46:11.552819  128080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0124 17:46:11.555718  128080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 17:46:11.766086  128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0124 17:46:11.766110  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:11.766122  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:11.766131  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:11.839048  128080 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
	I0124 17:46:11.839079  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:11.839091  128080 round_trippers.go:580]     Content-Length: 291
	I0124 17:46:11.839100  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:11 GMT
	I0124 17:46:11.839110  128080 round_trippers.go:580]     Audit-Id: 5654b5dd-e997-4a21-aa30-931932c7b55f
	I0124 17:46:11.839119  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:11.839143  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:11.839152  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:11.839161  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:11.839203  128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"362","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0124 17:46:11.839316  128080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-585561" context rescaled to 1 replicas
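The scale-subresource traffic above (a GET of `.../deployments/coredns/scale`, a PUT with `spec.replicas` dropped from 2 to 1, then a confirming GET once `status.replicas` reaches 1) is how the rescale to one CoreDNS replica is done. The typed client-go equivalent would be roughly the following; the kubeconfig path is an assumption for the sketch:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		deploys := kubernetes.NewForConfigOrDie(cfg).AppsV1().Deployments("kube-system")

		// GET .../deployments/coredns/scale, as in the first request above.
		scale, err := deploys.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// PUT the same Scale object back with spec.replicas set to 1.
		scale.Spec.Replicas = 1
		if _, err := deploys.UpdateScale(
			context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}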
	I0124 17:46:11.839348  128080 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 17:46:11.841837  128080 out.go:177] * Verifying Kubernetes components...
	I0124 17:46:11.843840  128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 17:46:12.053650  128080 command_runner.go:130] > configmap/coredns replaced
	I0124 17:46:12.058269  128080 start.go:908] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
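`configmap/coredns replaced` confirms the sed pipeline a few lines up: it inserts a `hosts` stanza before the `forward` block and a `log` directive before `errors`, then feeds the result to `kubectl replace`. Reconstructed from those two sed expressions and the Corefile dumped above (so indentation is approximate), the rewritten Corefile should read:

	.:53 {
	    log
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	       ttl 30
	    }
	    prometheus :9153
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    cache 30
	    loop
	    reload
	    loadbalance
	}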
	I0124 17:46:12.258166  128080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0124 17:46:12.258295  128080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0124 17:46:12.258317  128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0124 17:46:12.261200  128080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0124 17:46:12.270920  128080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0124 17:46:12.278019  128080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0124 17:46:12.341770  128080 command_runner.go:130] > pod/storage-provisioner created
	I0124 17:46:12.349060  128080 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0124 17:46:12.347563  128080 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:46:12.350819  128080 addons.go:488] enableAddons completed in 1.106456266s
	I0124 17:46:12.354715  128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 17:46:12.355351  128080 node_ready.go:35] waiting up to 6m0s for node "multinode-585561" to be "Ready" ...
	I0124 17:46:12.355428  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:12.355439  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:12.355450  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:12.355463  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:12.357733  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:12.357777  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:12.357789  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:12.357803  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:12.357819  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:12.357831  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:12.357846  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:12 GMT
	I0124 17:46:12.357859  128080 round_trippers.go:580]     Audit-Id: da4b8883-0a53-42f5-bd59-9f5b92eb5e37
	I0124 17:46:12.358027  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:12.358647  128080 node_ready.go:49] node "multinode-585561" has status "Ready":"True"
	I0124 17:46:12.358664  128080 node_ready.go:38] duration metric: took 3.293273ms waiting for node "multinode-585561" to be "Ready" ...
	I0124 17:46:12.358674  128080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
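Both waits above boil down to fetching an object and scanning `status.conditions`: node_ready looks for the node's `Ready` condition (already `True` here after 3.29ms), and pod_ready applies the same test per system-critical pod. The two checks, sketched with client-go types (helper names are mine, not minikube's):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// nodeReady reports whether the NodeReady condition is True, which is what
	// yields the `"Ready":"True"` line in the log above.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// podReady applies the same test to a pod's PodReady condition.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Zero-value objects have no conditions, so both report false.
		fmt.Println(nodeReady(&corev1.Node{}), podReady(&corev1.Pod{}))
	}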
	I0124 17:46:12.358752  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0124 17:46:12.358769  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:12.358780  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:12.358790  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:12.362209  128080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0124 17:46:12.362234  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:12.362242  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:12.362247  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:12.362253  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:12.362258  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:12.362266  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:12 GMT
	I0124 17:46:12.362272  128080 round_trippers.go:580]     Audit-Id: 75b8da56-ebb4-4006-b382-354f188a10a6
	I0124 17:46:12.362816  128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"375"},"items":[{"metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60456 chars]
	I0124 17:46:12.366111  128080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-5748b" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:12.366188  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
	I0124 17:46:12.366198  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:12.366206  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:12.366216  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:12.371417  128080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0124 17:46:12.371439  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:12.371448  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:12.371456  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:12.371468  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:12 GMT
	I0124 17:46:12.371477  128080 round_trippers.go:580]     Audit-Id: 5a557803-3f00-4921-87e1-53bf1bbbb7b8
	I0124 17:46:12.371486  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:12.371499  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:12.371939  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0124 17:46:12.372424  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:12.372437  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:12.372445  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:12.372451  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:12.374387  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:12.374410  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:12.374420  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:12.374430  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:12 GMT
	I0124 17:46:12.374442  128080 round_trippers.go:580]     Audit-Id: 68d67445-5ac8-4709-a6d2-1ada528dd390
	I0124 17:46:12.374458  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:12.374467  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:12.374477  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:12.374582  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:12.875303  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
	I0124 17:46:12.875329  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:12.875341  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:12.875351  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:12.878007  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:12.878035  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:12.878046  128080 round_trippers.go:580]     Audit-Id: cc041326-b604-4316-9e60-4c454a29d35d
	I0124 17:46:12.878055  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:12.878064  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:12.878074  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:12.878086  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:12.878095  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:12 GMT
	I0124 17:46:12.878225  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0124 17:46:12.878682  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:12.878697  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:12.878707  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:12.878716  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:12.880865  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:12.880894  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:12.880903  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:12 GMT
	I0124 17:46:12.880911  128080 round_trippers.go:580]     Audit-Id: 5228e599-067b-4a85-af43-7a0fc70a71cf
	I0124 17:46:12.880919  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:12.880929  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:12.880940  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:12.880951  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:12.881066  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:13.375648  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
	I0124 17:46:13.375675  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:13.375688  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:13.375697  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:13.378104  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:13.378128  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:13.378138  128080 round_trippers.go:580]     Audit-Id: 8ca965c5-21d1-42a6-a678-42536fcf7366
	I0124 17:46:13.378145  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:13.378153  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:13.378160  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:13.378168  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:13.378181  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:13 GMT
	I0124 17:46:13.378281  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0124 17:46:13.378712  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:13.378723  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:13.378730  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:13.378736  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:13.380607  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:13.380654  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:13.380673  128080 round_trippers.go:580]     Audit-Id: 202813aa-db60-40e2-a2dc-1204bcc7918e
	I0124 17:46:13.380693  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:13.380709  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:13.380718  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:13.380728  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:13.380740  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:13 GMT
	I0124 17:46:13.380868  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:13.875422  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
	I0124 17:46:13.875446  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:13.875458  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:13.875469  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:13.878282  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:13.878310  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:13.878321  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:13 GMT
	I0124 17:46:13.878328  128080 round_trippers.go:580]     Audit-Id: cb1b7f8f-87b9-4009-ae78-360e70b61905
	I0124 17:46:13.878336  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:13.878344  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:13.878352  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:13.878361  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:13.878516  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0124 17:46:13.879120  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:13.879138  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:13.879150  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:13.879159  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:13.881242  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:13.881261  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:13.881271  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:13.881278  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:13.881287  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:13 GMT
	I0124 17:46:13.881295  128080 round_trippers.go:580]     Audit-Id: 72db6b49-460f-4450-92db-7e2979a109c9
	I0124 17:46:13.881302  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:13.881312  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:13.881434  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:14.376088  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
	I0124 17:46:14.376111  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:14.376129  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:14.376138  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:14.378844  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:14.378876  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:14.378887  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:14.378897  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:14.378905  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:14.378913  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:14.378922  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:14 GMT
	I0124 17:46:14.378931  128080 round_trippers.go:580]     Audit-Id: ae4240bb-f051-4daa-82ef-fc98fcfe8084
	I0124 17:46:14.379058  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0124 17:46:14.379629  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:14.379645  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:14.379656  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:14.379667  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:14.381887  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:14.381909  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:14.381935  128080 round_trippers.go:580]     Audit-Id: 38667b85-4684-4d19-9d49-5d98172087be
	I0124 17:46:14.381948  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:14.381960  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:14.381972  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:14.381983  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:14.381995  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:14 GMT
	I0124 17:46:14.382131  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:14.382508  128080 pod_ready.go:102] pod "coredns-787d4945fb-5748b" in "kube-system" namespace has status "Ready":"False"
	I0124 17:46:14.875279  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
	I0124 17:46:14.875301  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:14.875311  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:14.875321  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:14.877630  128080 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0124 17:46:14.877650  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:14.877657  128080 round_trippers.go:580]     Audit-Id: 88cafee3-f464-4b59-84f8-ef367572e7ad
	I0124 17:46:14.877663  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:14.877668  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:14.877679  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:14.877685  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:14.877693  128080 round_trippers.go:580]     Content-Length: 216
	I0124 17:46:14.877700  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:14 GMT
	I0124 17:46:14.877725  128080 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-5748b\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-5748b","kind":"pods"},"code":404}
	I0124 17:46:14.877889  128080 pod_ready.go:97] error getting pod "coredns-787d4945fb-5748b" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-5748b" not found
	I0124 17:46:14.877906  128080 pod_ready.go:81] duration metric: took 2.511771691s waiting for pod "coredns-787d4945fb-5748b" in "kube-system" namespace to be "Ready" ...
	E0124 17:46:14.877916  128080 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-5748b" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-5748b" not found
	I0124 17:46:14.877930  128080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:14.877990  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:14.877998  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:14.878005  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:14.878013  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:14.880246  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:14.880266  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:14.880274  128080 round_trippers.go:580]     Audit-Id: 940ac1ec-8737-4632-a5a2-06c8c4e3ced3
	I0124 17:46:14.880282  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:14.880294  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:14.880302  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:14.880310  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:14.880318  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:14 GMT
	I0124 17:46:14.880441  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:14.881059  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:14.881082  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:14.881096  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:14.881104  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:14.883101  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:14.883121  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:14.883130  128080 round_trippers.go:580]     Audit-Id: ea1669fc-d5aa-4c4c-bdd2-4af6705d6d4f
	I0124 17:46:14.883138  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:14.883146  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:14.883155  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:14.883167  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:14.883182  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:14 GMT
	I0124 17:46:14.883289  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:15.384293  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:15.384311  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:15.384320  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:15.384326  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:15.386563  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:15.386587  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:15.386595  128080 round_trippers.go:580]     Audit-Id: c413c0be-794a-4ea6-b502-4dc6f6964015
	I0124 17:46:15.386602  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:15.386611  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:15.386620  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:15.386633  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:15.386644  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:15 GMT
	I0124 17:46:15.386768  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:15.387210  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:15.387224  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:15.387231  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:15.387237  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:15.389055  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:15.389077  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:15.389086  128080 round_trippers.go:580]     Audit-Id: af3c6647-ee30-4bee-b064-6474453ab36c
	I0124 17:46:15.389095  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:15.389103  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:15.389108  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:15.389114  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:15.389122  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:15 GMT
	I0124 17:46:15.389247  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:15.883815  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:15.883835  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:15.883844  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:15.883850  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:15.886020  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:15.886046  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:15.886056  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:15.886064  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:15.886072  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:15.886079  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:15.886087  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:15 GMT
	I0124 17:46:15.886096  128080 round_trippers.go:580]     Audit-Id: a44c8b78-d512-4ca7-9d90-73d0b45e2a7d
	I0124 17:46:15.886187  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:15.886627  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:15.886642  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:15.886650  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:15.886656  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:15.888435  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:15.888459  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:15.888466  128080 round_trippers.go:580]     Audit-Id: 08e0ef81-d9ec-409f-9c3d-ec2870dfa238
	I0124 17:46:15.888472  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:15.888477  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:15.888482  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:15.888487  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:15.888492  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:15 GMT
	I0124 17:46:15.888685  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:16.384141  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:16.384161  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:16.384169  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:16.384176  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:16.386375  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:16.386401  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:16.386410  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:16.386417  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:16.386423  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:16 GMT
	I0124 17:46:16.386431  128080 round_trippers.go:580]     Audit-Id: 3a22b1f2-0462-49ac-aa98-6495658c05b5
	I0124 17:46:16.386437  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:16.386445  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:16.386552  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:16.387027  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:16.387042  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:16.387049  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:16.387055  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:16.388772  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:16.388791  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:16.388798  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:16.388805  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:16 GMT
	I0124 17:46:16.388810  128080 round_trippers.go:580]     Audit-Id: 4ca7226d-76d9-4a77-8734-861d3a15195d
	I0124 17:46:16.388826  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:16.388839  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:16.388850  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:16.388965  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:16.884670  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:16.884690  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:16.884699  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:16.884706  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:16.886775  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:16.886805  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:16.886816  128080 round_trippers.go:580]     Audit-Id: 55765fb3-15d2-4efa-bb09-0de3f5ab5931
	I0124 17:46:16.886825  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:16.886833  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:16.886840  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:16.886849  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:16.886859  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:16 GMT
	I0124 17:46:16.886966  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:16.887385  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:16.887397  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:16.887404  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:16.887410  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:16.889089  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:16.889107  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:16.889113  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:16.889119  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:16.889124  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:16 GMT
	I0124 17:46:16.889129  128080 round_trippers.go:580]     Audit-Id: cbe9bdce-a7d6-4f16-913a-c9ecf43e8706
	I0124 17:46:16.889134  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:16.889140  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:16.889278  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:16.889574  128080 pod_ready.go:102] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"False"
	I0124 17:46:17.383859  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:17.383880  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:17.383888  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:17.383894  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:17.386215  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:17.386247  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:17.386258  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:17 GMT
	I0124 17:46:17.386267  128080 round_trippers.go:580]     Audit-Id: 8c956872-b5ca-4ed1-a5a5-dc56b5e38ec7
	I0124 17:46:17.386293  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:17.386305  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:17.386315  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:17.386324  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:17.386475  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:17.386940  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:17.386953  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:17.386960  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:17.386966  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:17.388766  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:17.388788  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:17.388797  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:17.388806  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:17.388815  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:17.388824  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:17 GMT
	I0124 17:46:17.388841  128080 round_trippers.go:580]     Audit-Id: ee94f717-d9ca-40a2-8215-1f90f0ac56c4
	I0124 17:46:17.388850  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:17.388953  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:17.884657  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:17.884681  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:17.884689  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:17.884695  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:17.886917  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:17.886944  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:17.886954  128080 round_trippers.go:580]     Audit-Id: 478d2475-62db-43cc-91af-13026422d22e
	I0124 17:46:17.886963  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:17.886972  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:17.886980  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:17.886989  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:17.886994  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:17 GMT
	I0124 17:46:17.887077  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:17.887512  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:17.887524  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:17.887531  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:17.887537  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:17.889214  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:17.889235  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:17.889244  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:17 GMT
	I0124 17:46:17.889252  128080 round_trippers.go:580]     Audit-Id: d9767526-9706-4906-9274-29b305037d36
	I0124 17:46:17.889260  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:17.889272  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:17.889283  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:17.889292  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:17.889392  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:18.383995  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:18.384017  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:18.384025  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:18.384031  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:18.386312  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:18.386342  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:18.386352  128080 round_trippers.go:580]     Audit-Id: b3169cd9-91e5-4b7d-b0fb-39d25ed6eaea
	I0124 17:46:18.386360  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:18.386372  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:18.386388  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:18.386397  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:18.386408  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:18 GMT
	I0124 17:46:18.386517  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:18.387048  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:18.387065  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:18.387077  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:18.387092  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:18.388772  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:18.388793  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:18.388802  128080 round_trippers.go:580]     Audit-Id: 07d8bef4-63b1-41fc-bc08-a451252319c0
	I0124 17:46:18.388810  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:18.388815  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:18.388820  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:18.388829  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:18.388837  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:18 GMT
	I0124 17:46:18.388943  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:18.884607  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:18.884627  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:18.884636  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:18.884642  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:18.886715  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:18.886734  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:18.886741  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:18 GMT
	I0124 17:46:18.886747  128080 round_trippers.go:580]     Audit-Id: 3ab135f8-9f0b-4db7-b22a-738f29bccccd
	I0124 17:46:18.886752  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:18.886757  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:18.886762  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:18.886767  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:18.886897  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:18.887336  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:18.887347  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:18.887355  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:18.887361  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:18.889083  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:18.889105  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:18.889114  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:18 GMT
	I0124 17:46:18.889126  128080 round_trippers.go:580]     Audit-Id: 0b42a62f-9b39-4c3b-bb5a-75c434690c59
	I0124 17:46:18.889135  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:18.889145  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:18.889154  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:18.889160  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:18.889257  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:19.383850  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:19.383868  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:19.383877  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:19.383883  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:19.386512  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:19.386534  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:19.386542  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:19.386549  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:19.386558  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:19 GMT
	I0124 17:46:19.386575  128080 round_trippers.go:580]     Audit-Id: 0fc11ad0-729d-44ed-b7d1-2b8906765599
	I0124 17:46:19.386583  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:19.386593  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:19.386715  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:19.387251  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:19.387265  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:19.387274  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:19.387287  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:19.389059  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:19.389081  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:19.389090  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:19.389099  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:19.389107  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:19.389116  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:19 GMT
	I0124 17:46:19.389128  128080 round_trippers.go:580]     Audit-Id: 3ba4caea-3e2b-4e5b-87c0-5b93f9b4aa61
	I0124 17:46:19.389139  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:19.389235  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:19.389559  128080 pod_ready.go:102] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"False"
	I0124 17:46:19.883802  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:19.883834  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:19.883850  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:19.883860  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:19.886156  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:19.886182  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:19.886193  128080 round_trippers.go:580]     Audit-Id: 347a1eae-e9ca-4a14-b52d-28006cac3924
	I0124 17:46:19.886202  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:19.886209  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:19.886231  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:19.886243  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:19.886256  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:19 GMT
	I0124 17:46:19.886364  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0124 17:46:19.886833  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:19.886847  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:19.886854  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:19.886860  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:19.888555  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:19.888577  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:19.888585  128080 round_trippers.go:580]     Audit-Id: cba8f115-4fb9-48ad-8247-51bda1b13d4d
	I0124 17:46:19.888591  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:19.888596  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:19.888602  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:19.888608  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:19.888620  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:19 GMT
	I0124 17:46:19.888738  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:20.384344  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:20.384364  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.384373  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.384378  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.386534  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:20.386557  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.386567  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.386577  128080 round_trippers.go:580]     Audit-Id: 6e617859-d68f-497e-ac28-252c7bd34b25
	I0124 17:46:20.386586  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.386594  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.386600  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.386608  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.386729  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5901 chars]
	I0124 17:46:20.387203  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:20.387216  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.387223  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.387230  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.388896  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.388912  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.388918  128080 round_trippers.go:580]     Audit-Id: 29e61c99-e2b2-4074-804e-dc3c5622caa5
	I0124 17:46:20.388928  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.388939  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.388946  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.388954  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.388966  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.389092  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:20.389413  128080 pod_ready.go:92] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:20.389433  128080 pod_ready.go:81] duration metric: took 5.51149317s waiting for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
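
For reference, the readiness probe traced above (pod_ready.go) boils down to: fetch the pod, inspect its Ready condition, retry until it is True or the 6m0s timeout expires. A minimal client-go sketch of that predicate; the package and function names here are illustrative, not minikube's, and error handling is abbreviated:

package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the named pod's Ready condition is True --
// the same predicate the pod_ready.go:92 lines above are logging.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
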
	I0124 17:46:20.389444  128080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.389488  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-585561
	I0124 17:46:20.389495  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.389502  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.389512  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.391285  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.391306  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.391315  128080 round_trippers.go:580]     Audit-Id: 08b89957-cd1b-435b-9368-d003f69af723
	I0124 17:46:20.391324  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.391336  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.391347  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.391362  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.391378  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.391465  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-585561","namespace":"kube-system","uid":"e90a4912-09cf-4017-b275-36e5cbaf8fb7","resourceVersion":"307","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.mirror":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.seen":"2023-01-24T17:45:58.071793905Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5806 chars]
	I0124 17:46:20.391863  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:20.391876  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.391882  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.391891  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.393484  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.393504  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.393513  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.393522  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.393533  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.393544  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.393555  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.393564  128080 round_trippers.go:580]     Audit-Id: 9d517b27-925a-470b-88a5-106c5a23187a
	I0124 17:46:20.393690  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:20.393955  128080 pod_ready.go:92] pod "etcd-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:20.393965  128080 pod_ready.go:81] duration metric: took 4.514247ms waiting for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.393976  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.394011  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-585561
	I0124 17:46:20.394018  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.394025  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.394031  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.395611  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.395630  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.395641  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.395650  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.395659  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.395664  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.395674  128080 round_trippers.go:580]     Audit-Id: 92a0a131-6edc-4951-9b5e-e6a42480379b
	I0124 17:46:20.395680  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.395778  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-585561","namespace":"kube-system","uid":"b6111d69-e414-4456-b981-c45749f2bc69","resourceVersion":"270","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.mirror":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.seen":"2023-01-24T17:45:58.071829413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0124 17:46:20.396163  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:20.396176  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.396188  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.396197  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.397602  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.397620  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.397631  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.397638  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.397646  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.397654  128080 round_trippers.go:580]     Audit-Id: d58babc9-9368-4cc7-857b-5e9ec5fa997c
	I0124 17:46:20.397664  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.397676  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.397763  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:20.398120  128080 pod_ready.go:92] pod "kube-apiserver-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:20.398135  128080 pod_ready.go:81] duration metric: took 4.153168ms waiting for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.398145  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.398196  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-585561
	I0124 17:46:20.398207  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.398217  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.398229  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.399799  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.399825  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.399834  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.399840  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.399846  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.399854  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.399860  128080 round_trippers.go:580]     Audit-Id: 13064886-e712-451b-8ec0-faadd272e681
	I0124 17:46:20.399867  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.400012  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-585561","namespace":"kube-system","uid":"64983300-9251-4324-9c8a-e0ff30ae4238","resourceVersion":"385","creationTimestamp":"2023-01-24T17:45:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.mirror":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.seen":"2023-01-24T17:45:47.607485043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0124 17:46:20.400426  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:20.400439  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.400445  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.400451  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.401834  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.401853  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.401864  128080 round_trippers.go:580]     Audit-Id: 1d611837-4e9a-4ab3-a31c-5611ca6544be
	I0124 17:46:20.401872  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.401877  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.401882  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.401888  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.401897  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.401984  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:20.402241  128080 pod_ready.go:92] pod "kube-controller-manager-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:20.402247  128080 pod_ready.go:81] duration metric: took 4.093647ms waiting for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.402255  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.402287  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrvx
	I0124 17:46:20.402290  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.402297  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.402303  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.403747  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.403765  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.403776  128080 round_trippers.go:580]     Audit-Id: 34077473-fdc5-4cd9-a206-58d2e6eb561d
	I0124 17:46:20.403784  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.403797  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.403813  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.403824  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.403831  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.403943  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wxrvx","generateName":"kube-proxy-","namespace":"kube-system","uid":"435cbf4e-148f-46a7-894c-73bea3a2bb9c","resourceVersion":"386","creationTimestamp":"2023-01-24T17:46:10Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0124 17:46:20.404298  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:20.404310  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.404317  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.404323  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.405746  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:20.405764  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.405773  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.405782  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.405790  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.405802  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.405813  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.405823  128080 round_trippers.go:580]     Audit-Id: 5caf9958-5dbd-409f-8628-ecf17330c11d
	I0124 17:46:20.405906  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:20.406176  128080 pod_ready.go:92] pod "kube-proxy-wxrvx" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:20.406188  128080 pod_ready.go:81] duration metric: took 3.928229ms waiting for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.406195  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.584563  128080 request.go:622] Waited for 178.269204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
	I0124 17:46:20.584621  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
	I0124 17:46:20.584640  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.584648  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.584655  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.586797  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:20.586826  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.586836  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.586844  128080 round_trippers.go:580]     Audit-Id: fcf79f37-f1fe-418f-b173-0e39c46a871b
	I0124 17:46:20.586852  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.586861  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.586869  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.586876  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.586969  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-585561","namespace":"kube-system","uid":"99936e13-49bf-4ab3-82ea-812373f654b6","resourceVersion":"291","creationTimestamp":"2023-01-24T17:45:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.mirror":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.seen":"2023-01-24T17:45:47.607460620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0124 17:46:20.784718  128080 request.go:622] Waited for 197.352781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:20.784783  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:20.784788  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.784801  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.784812  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.786986  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:20.787013  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.787023  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.787031  128080 round_trippers.go:580]     Audit-Id: b6d78ce3-520f-4b89-8d8d-a9f8728cabd2
	I0124 17:46:20.787039  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.787047  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.787055  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.787065  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.787161  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
	I0124 17:46:20.787497  128080 pod_ready.go:92] pod "kube-scheduler-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:20.787509  128080 pod_ready.go:81] duration metric: took 381.309469ms waiting for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:20.787519  128080 pod_ready.go:38] duration metric: took 8.428834418s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
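
The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's own token-bucket rate limiter, which is sized by the QPS and Burst fields of the REST config; they do not indicate server-side API Priority and Fairness. A sketch of where those knobs live, assuming client-go's documented defaults (QPS 5, Burst 10) purely for illustration:

package throttle

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset whose requests are throttled client-side
// by a token bucket. The values below are client-go's defaults, shown
// explicitly; they are not necessarily what minikube configures.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests per second
	cfg.Burst = 10 // short bursts allowed above QPS
	return kubernetes.NewForConfig(cfg)
}
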
	I0124 17:46:20.787536  128080 api_server.go:51] waiting for apiserver process to appear ...
	I0124 17:46:20.787613  128080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 17:46:20.797528  128080 command_runner.go:130] > 2612
	I0124 17:46:20.797565  128080 api_server.go:71] duration metric: took 8.958192044s to wait for apiserver process to appear ...
	I0124 17:46:20.797584  128080 api_server.go:87] waiting for apiserver healthz status ...
	I0124 17:46:20.797595  128080 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0124 17:46:20.800990  128080 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
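
The healthz gate above is a plain HTTPS GET whose body is literally "ok" when the apiserver is healthy. A self-contained sketch, assuming a local test cluster with a self-signed CA (hence InsecureSkipVerify; minikube itself trusts the cluster CA and presents client certs):

package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// checkHealthz GETs <endpoint>/healthz and returns the body ("ok" on success).
func checkHealthz(endpoint string) (string, error) {
	client := &http.Client{Transport: &http.Transport{
		// Assumption of this sketch: skip verification of the test
		// cluster's self-signed certificate.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return string(body), nil
}
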
	I0124 17:46:20.801052  128080 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0124 17:46:20.801063  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.801075  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.801089  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.801726  128080 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0124 17:46:20.801741  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.801748  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.801754  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.801759  128080 round_trippers.go:580]     Content-Length: 263
	I0124 17:46:20.801765  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.801773  128080 round_trippers.go:580]     Audit-Id: caa658d8-bf09-4459-8140-61696daba67d
	I0124 17:46:20.801778  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.801788  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.801804  128080 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0124 17:46:20.801875  128080 api_server.go:140] control plane version: v1.26.1
	I0124 17:46:20.801887  128080 api_server.go:130] duration metric: took 4.299117ms to wait for apiserver health ...
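
The "control plane version" is read straight out of the /version JSON printed above. A sketch of decoding it; the struct and function names are illustrative:

package apiversion

import "encoding/json"

// info mirrors the fields of the /version response shown above.
type info struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

// controlPlaneVersion extracts gitVersion from a /version body,
// e.g. "v1.26.1" for the response logged above.
func controlPlaneVersion(body []byte) (string, error) {
	var v info
	if err := json.Unmarshal(body, &v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}
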
	I0124 17:46:20.801894  128080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0124 17:46:20.985277  128080 request.go:622] Waited for 183.329754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0124 17:46:20.985347  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0124 17:46:20.985384  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:20.985399  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:20.985410  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:20.988263  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:20.988290  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:20.988301  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:20.988319  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:20.988328  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:20 GMT
	I0124 17:46:20.988334  128080 round_trippers.go:580]     Audit-Id: cf858d98-7d36-4bcd-8adf-c08e3b82112d
	I0124 17:46:20.988342  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:20.988349  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:20.988790  128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54922 chars]
	I0124 17:46:20.990484  128080 system_pods.go:59] 8 kube-system pods found
	I0124 17:46:20.990503  128080 system_pods.go:61] "coredns-787d4945fb-lfdwf" [3ad6d110-548d-4cec-bae8-945a1e7d7853] Running
	I0124 17:46:20.990508  128080 system_pods.go:61] "etcd-multinode-585561" [e90a4912-09cf-4017-b275-36e5cbaf8fb7] Running
	I0124 17:46:20.990515  128080 system_pods.go:61] "kindnet-4zggw" [17440d73-e612-44ea-a341-3d018744042f] Running
	I0124 17:46:20.990519  128080 system_pods.go:61] "kube-apiserver-multinode-585561" [b6111d69-e414-4456-b981-c45749f2bc69] Running
	I0124 17:46:20.990526  128080 system_pods.go:61] "kube-controller-manager-multinode-585561" [64983300-9251-4324-9c8a-e0ff30ae4238] Running
	I0124 17:46:20.990530  128080 system_pods.go:61] "kube-proxy-wxrvx" [435cbf4e-148f-46a7-894c-73bea3a2bb9c] Running
	I0124 17:46:20.990535  128080 system_pods.go:61] "kube-scheduler-multinode-585561" [99936e13-49bf-4ab3-82ea-812373f654b6] Running
	I0124 17:46:20.990541  128080 system_pods.go:61] "storage-provisioner" [f521d253-9340-4d51-b6da-fa5443e09527] Running
	I0124 17:46:20.990546  128080 system_pods.go:74] duration metric: took 188.648411ms to wait for pod list to return data ...
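
The system_pods.go check amounts to a namespaced list of kube-system plus a phase test on each item, which is exactly what the eight "Running" lines above summarize. A client-go sketch of the same listing (names are illustrative):

package syspods

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runningSystemPods lists kube-system pods and returns the names of
// those in phase Running.
func runningSystemPods(ctx context.Context, c kubernetes.Interface) ([]string, error) {
	list, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var running []string
	for _, p := range list.Items {
		if p.Status.Phase == corev1.PodRunning {
			running = append(running, p.Name)
		}
	}
	return running, nil
}
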
	I0124 17:46:20.990557  128080 default_sa.go:34] waiting for default service account to be created ...
	I0124 17:46:21.185033  128080 request.go:622] Waited for 194.413978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0124 17:46:21.185083  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0124 17:46:21.185089  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:21.185097  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:21.185107  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:21.187323  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:21.187344  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:21.187351  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:21.187357  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:21.187363  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:21.187368  128080 round_trippers.go:580]     Content-Length: 261
	I0124 17:46:21.187373  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:21 GMT
	I0124 17:46:21.187381  128080 round_trippers.go:580]     Audit-Id: 015c9f4b-5f6d-4c61-976a-399fcd3f6df6
	I0124 17:46:21.187386  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:21.187408  128080 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"15e42089-c766-4da1-bcdd-80148829615f","resourceVersion":"329","creationTimestamp":"2023-01-24T17:46:10Z"}}]}
	I0124 17:46:21.187565  128080 default_sa.go:45] found service account: "default"
	I0124 17:46:21.187577  128080 default_sa.go:55] duration metric: took 197.0119ms for default service account to be created ...
	I0124 17:46:21.187586  128080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0124 17:46:21.384805  128080 request.go:622] Waited for 197.148231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0124 17:46:21.384950  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0124 17:46:21.384979  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:21.384992  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:21.385006  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:21.387897  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:21.387922  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:21.387934  128080 round_trippers.go:580]     Audit-Id: 9c05e512-ec8a-4da9-8ec1-9fc849724916
	I0124 17:46:21.387940  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:21.387950  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:21.387959  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:21.387968  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:21.387980  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:21 GMT
	I0124 17:46:21.388356  128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54922 chars]
	I0124 17:46:21.390057  128080 system_pods.go:86] 8 kube-system pods found
	I0124 17:46:21.390090  128080 system_pods.go:89] "coredns-787d4945fb-lfdwf" [3ad6d110-548d-4cec-bae8-945a1e7d7853] Running
	I0124 17:46:21.390096  128080 system_pods.go:89] "etcd-multinode-585561" [e90a4912-09cf-4017-b275-36e5cbaf8fb7] Running
	I0124 17:46:21.390100  128080 system_pods.go:89] "kindnet-4zggw" [17440d73-e612-44ea-a341-3d018744042f] Running
	I0124 17:46:21.390107  128080 system_pods.go:89] "kube-apiserver-multinode-585561" [b6111d69-e414-4456-b981-c45749f2bc69] Running
	I0124 17:46:21.390112  128080 system_pods.go:89] "kube-controller-manager-multinode-585561" [64983300-9251-4324-9c8a-e0ff30ae4238] Running
	I0124 17:46:21.390117  128080 system_pods.go:89] "kube-proxy-wxrvx" [435cbf4e-148f-46a7-894c-73bea3a2bb9c] Running
	I0124 17:46:21.390121  128080 system_pods.go:89] "kube-scheduler-multinode-585561" [99936e13-49bf-4ab3-82ea-812373f654b6] Running
	I0124 17:46:21.390127  128080 system_pods.go:89] "storage-provisioner" [f521d253-9340-4d51-b6da-fa5443e09527] Running
	I0124 17:46:21.390133  128080 system_pods.go:126] duration metric: took 202.542832ms to wait for k8s-apps to be running ...
	I0124 17:46:21.390141  128080 system_svc.go:44] waiting for kubelet service to be running ....
	I0124 17:46:21.390180  128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 17:46:21.399718  128080 system_svc.go:56] duration metric: took 9.569922ms WaitForService to wait for kubelet.
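
The kubelet check is a single remote command whose exit status carries the answer: `systemctl is-active --quiet` exits 0 only when the unit is active. Minikube runs it over SSH inside the node (the ssh_runner line above); it is sketched here as a local exec purely for brevity:

package svc

import "os/exec"

// kubeletActive reports whether the kubelet unit is active.
// Sketch assumption: run locally with sudo rather than over SSH.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
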
	I0124 17:46:21.399739  128080 kubeadm.go:578] duration metric: took 9.560367944s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0124 17:46:21.399757  128080 node_conditions.go:102] verifying NodePressure condition ...
	I0124 17:46:21.585188  128080 request.go:622] Waited for 185.358538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0124 17:46:21.585240  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0124 17:46:21.585245  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:21.585253  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:21.585259  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:21.587376  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:21.587398  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:21.587409  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:21.587419  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:21.587430  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:21.587443  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:21.587452  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:21 GMT
	I0124 17:46:21.587458  128080 round_trippers.go:580]     Audit-Id: f6fbab4c-0ab2-405b-92c7-e741f3432606
	I0124 17:46:21.587556  128080 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5261 chars]
	I0124 17:46:21.588052  128080 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0124 17:46:21.588075  128080 node_conditions.go:123] node cpu capacity is 8
	I0124 17:46:21.588089  128080 node_conditions.go:105] duration metric: took 188.325958ms to run NodePressure ...
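
The NodePressure verification reads the capacities logged above out of Node.Status.Capacity and confirms no node reports memory or disk pressure. A sketch of the equivalent read (function name is illustrative):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity prints each node's cpu and ephemeral-storage capacity
// (the two values logged above) and flags memory or disk pressure.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
		for _, cond := range n.Status.Conditions {
			if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
				cond.Status == corev1.ConditionTrue {
				fmt.Printf("  %s is under %s\n", n.Name, cond.Type)
			}
		}
	}
	return nil
}
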
	I0124 17:46:21.588105  128080 start.go:226] waiting for startup goroutines ...
	I0124 17:46:21.590706  128080 out.go:177] 
	I0124 17:46:21.592588  128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:46:21.592702  128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
	I0124 17:46:21.594822  128080 out.go:177] * Starting worker node multinode-585561-m02 in cluster multinode-585561
	I0124 17:46:21.596265  128080 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 17:46:21.597851  128080 out.go:177] * Pulling base image ...
	I0124 17:46:21.599887  128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:46:21.599916  128080 cache.go:57] Caching tarball of preloaded images
	I0124 17:46:21.599989  128080 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 17:46:21.600028  128080 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 17:46:21.600038  128080 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0124 17:46:21.600133  128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
	I0124 17:46:21.625597  128080 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 17:46:21.625617  128080 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 17:46:21.625641  128080 cache.go:193] Successfully downloaded all kic artifacts
	I0124 17:46:21.625676  128080 start.go:364] acquiring machines lock for multinode-585561-m02: {Name:mkf9f5cd760f22fd0c5ef803f9e297631aab81d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 17:46:21.625790  128080 start.go:368] acquired machines lock for "multinode-585561-m02" in 94.878µs
	I0124 17:46:21.625821  128080 start.go:93] Provisioning new machine with config: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0124 17:46:21.625908  128080 start.go:125] createHost starting for "m02" (driver="docker")
	I0124 17:46:21.629423  128080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0124 17:46:21.629553  128080 start.go:159] libmachine.API.Create for "multinode-585561" (driver="docker")
	I0124 17:46:21.629585  128080 client.go:168] LocalClient.Create starting
	I0124 17:46:21.629672  128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem
	I0124 17:46:21.629706  128080 main.go:141] libmachine: Decoding PEM data...
	I0124 17:46:21.629723  128080 main.go:141] libmachine: Parsing certificate...
	I0124 17:46:21.629773  128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem
	I0124 17:46:21.629794  128080 main.go:141] libmachine: Decoding PEM data...
	I0124 17:46:21.629805  128080 main.go:141] libmachine: Parsing certificate...
	I0124 17:46:21.630003  128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 17:46:21.652201  128080 network_create.go:76] Found existing network {name:multinode-585561 subnet:0xc0010ec180 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0124 17:46:21.652254  128080 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-585561-m02" container
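
kic.go derives the worker's static IP from the existing cluster network: the gateway holds 192.168.58.1, the control plane holds .2, so m02 is assigned 192.168.58.3. A simplified sketch for an IPv4 /24; the single-octet offset arithmetic is an assumption of this sketch, not how minikube handles arbitrary masks:

package netutil

import (
	"fmt"
	"net"
)

// nthHostIP returns the nth address in an IPv4 subnet, e.g. n=3 in
// "192.168.58.0/24" yields 192.168.58.3 (the address assigned above).
func nthHostIP(subnet string, n byte) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(subnet)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("not an IPv4 subnet: %s", subnet)
	}
	host := make(net.IP, len(ip))
	copy(host, ip)
	host[3] += n // sketch assumption: offset fits in the last octet (/24)
	return host, nil
}
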
	I0124 17:46:21.652314  128080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0124 17:46:21.673644  128080 cli_runner.go:164] Run: docker volume create multinode-585561-m02 --label name.minikube.sigs.k8s.io=multinode-585561-m02 --label created_by.minikube.sigs.k8s.io=true
	I0124 17:46:21.696938  128080 oci.go:103] Successfully created a docker volume multinode-585561-m02
	I0124 17:46:21.697007  128080 cli_runner.go:164] Run: docker run --rm --name multinode-585561-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561-m02 --entrypoint /usr/bin/test -v multinode-585561-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0124 17:46:22.233519  128080 oci.go:107] Successfully prepared a docker volume multinode-585561-m02
	I0124 17:46:22.233553  128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:46:22.233571  128080 kic.go:190] Starting extracting preloaded images to volume ...
	I0124 17:46:22.233649  128080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0124 17:46:27.669677  128080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (5.43597092s)
	I0124 17:46:27.669726  128080 kic.go:199] duration metric: took 5.436150 seconds to extract preloaded images to volume
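
The extraction step is exactly the docker run recorded above: bind-mount the lz4 preload tarball read-only, bind-mount the node's volume, and untar into it with tar as the container entrypoint. The same invocation wrapped in os/exec (the wrapper function is illustrative; the docker arguments are taken verbatim from the log):

package preload

import (
	"fmt"
	"os/exec"
)

// extractPreload untars the preloaded image tarball into the node volume,
// replaying the `docker run --rm --entrypoint /usr/bin/tar ...` call above.
func extractPreload(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}
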
	W0124 17:46:27.669858  128080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0124 17:46:27.669961  128080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0124 17:46:27.764315  128080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-585561-m02 --name multinode-585561-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-585561-m02 --network multinode-585561 --ip 192.168.58.3 --volume multinode-585561-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0124 17:46:28.127108  128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Running}}
	I0124 17:46:28.152113  128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
	I0124 17:46:28.179018  128080 cli_runner.go:164] Run: docker exec multinode-585561-m02 stat /var/lib/dpkg/alternatives/iptables
	I0124 17:46:28.228386  128080 oci.go:144] the created container "multinode-585561-m02" has a running status.
	I0124 17:46:28.228422  128080 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa...
	I0124 17:46:28.453417  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0124 17:46:28.453456  128080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0124 17:46:28.529815  128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
	I0124 17:46:28.558374  128080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0124 17:46:28.558401  128080 kic_runner.go:114] Args: [docker exec --privileged multinode-585561-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
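
The id_rsa steps above are ordinary SSH provisioning: create a keypair on the host, push the public half into the container as /home/docker/.ssh/authorized_keys, then chown it. A sketch of the key-generation half in Go, assuming the golang.org/x/crypto/ssh package for the authorized_keys encoding (output file names match the log; everything else is illustrative):

// Generates an id_rsa / id_rsa.pub pair of the kind the provisioner copies
// into the container. The private key is PEM-encoded PKCS#1; the public key
// is written in authorized_keys ("ssh-rsa AAAA...") format.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", priv, 0600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote id_rsa and id_rsa.pub")
}
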
	I0124 17:46:28.635508  128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
	I0124 17:46:28.660755  128080 machine.go:88] provisioning docker machine ...
	I0124 17:46:28.660792  128080 ubuntu.go:169] provisioning hostname "multinode-585561-m02"
	I0124 17:46:28.660865  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:28.684684  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:46:28.684859  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0124 17:46:28.684875  128080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-585561-m02 && echo "multinode-585561-m02" | sudo tee /etc/hostname
	I0124 17:46:28.826545  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561-m02
	
	I0124 17:46:28.826632  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:28.851597  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:46:28.851770  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0124 17:46:28.851798  128080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-585561-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-585561-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 17:46:28.984415  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
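
The shell block above edits /etc/hosts idempotently: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 entry if there is one, and only otherwise append. The same decision logic in Go, operating on the file contents and printing the result instead of writing it back, so it is safe to run (the function name is ours):

// ensureHostname approximates the grep/sed branch of the SSH command: keep
// the file untouched when the name is already present, otherwise drop any
// existing 127.0.1.1 line and append the new mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostname(hosts, name string) string {
	lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
	var kept []string
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return hosts // already mapped somewhere: leave the file alone
		}
		if strings.HasPrefix(l, "127.0.1.1") {
			continue // an existing 127.0.1.1 entry is replaced below
		}
		kept = append(kept, l)
	}
	kept = append(kept, "127.0.1.1 "+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(ensureHostname(string(in), "multinode-585561-m02"))
}
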
	I0124 17:46:28.984451  128080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
	I0124 17:46:28.984466  128080 ubuntu.go:177] setting up certificates
	I0124 17:46:28.984473  128080 provision.go:83] configureAuth start
	I0124 17:46:28.984561  128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m02
	I0124 17:46:29.008459  128080 provision.go:138] copyHostCerts
	I0124 17:46:29.008526  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
	I0124 17:46:29.008556  128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
	I0124 17:46:29.008567  128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
	I0124 17:46:29.008643  128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
	I0124 17:46:29.008738  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
	I0124 17:46:29.008756  128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
	I0124 17:46:29.008760  128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
	I0124 17:46:29.008784  128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
	I0124 17:46:29.008825  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
	I0124 17:46:29.008838  128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
	I0124 17:46:29.008844  128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
	I0124 17:46:29.008863  128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
	I0124 17:46:29.008904  128080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561-m02]
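
The "generating server cert" line lists the exact subject alternative names baked into server.pem. A sketch of producing such a certificate with Go's crypto/x509, self-signed for brevity where the real flow signs with the minikube CA key (key size and validity period are assumptions):

// Builds a server certificate whose SANs match the san=[...] list above:
// IPs 192.168.58.3 and 127.0.0.1, plus localhost, minikube, and the node name.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-585561-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-585561-m02"},
	}
	// Self-signed here; the real flow uses ca.pem / ca-key.pem as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
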
	I0124 17:46:29.066247  128080 provision.go:172] copyRemoteCerts
	I0124 17:46:29.066297  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 17:46:29.066330  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:29.090529  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
	I0124 17:46:29.187623  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0124 17:46:29.187688  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 17:46:29.204477  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0124 17:46:29.204555  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0124 17:46:29.221516  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0124 17:46:29.221578  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0124 17:46:29.238801  128080 provision.go:86] duration metric: configureAuth took 254.318022ms
	I0124 17:46:29.238830  128080 ubuntu.go:193] setting minikube options for container-runtime
	I0124 17:46:29.239005  128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:46:29.239054  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:29.262199  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:46:29.262383  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0124 17:46:29.262402  128080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 17:46:29.392680  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 17:46:29.392712  128080 ubuntu.go:71] root file system type: overlay
	I0124 17:46:29.392935  128080 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 17:46:29.392998  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:29.416887  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:46:29.417037  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0124 17:46:29.417098  128080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 17:46:29.557121  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 17:46:29.557187  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:29.582230  128080 main.go:141] libmachine: Using SSH client type: native
	I0124 17:46:29.582371  128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0124 17:46:29.582391  128080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 17:46:30.222175  128080 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 17:46:29.552301730 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.58.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
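The diff above comes from an update pattern worth calling out: render the unit to docker.service.new, replace the live unit only when diff reports a change, then daemon-reload, enable, and restart. A sketch of that conditional update in Go (the restart flag and helper name are ours; minikube does all of this as one shell command over SSH):

// updateUnit mirrors the diff-or-replace one-liner: leave the running service
// alone when the rendered unit is byte-identical, otherwise swap the .new
// file in and reload/enable/restart docker.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func updateUnit(path string, rendered []byte, restart bool) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, rendered) {
		return nil // no diff: nothing to do
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	if !restart {
		return nil
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Demonstrated against a throwaway file; pass restart=true for the real path.
	p := filepath.Join(os.TempDir(), "docker.service")
	fmt.Println(updateUnit(p, []byte("[Unit]\nDescription=demo\n"), false))
}
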
	I0124 17:46:30.222205  128080 machine.go:91] provisioned docker machine in 1.56142627s
	I0124 17:46:30.222217  128080 client.go:171] LocalClient.Create took 8.592619612s
	I0124 17:46:30.222229  128080 start.go:167] duration metric: libmachine.API.Create for "multinode-585561" took 8.592676152s
	I0124 17:46:30.222237  128080 start.go:300] post-start starting for "multinode-585561-m02" (driver="docker")
	I0124 17:46:30.222244  128080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 17:46:30.222302  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 17:46:30.222343  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:30.248305  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
	I0124 17:46:30.340451  128080 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 17:46:30.343294  128080 command_runner.go:130] > NAME="Ubuntu"
	I0124 17:46:30.343330  128080 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0124 17:46:30.343335  128080 command_runner.go:130] > ID=ubuntu
	I0124 17:46:30.343340  128080 command_runner.go:130] > ID_LIKE=debian
	I0124 17:46:30.343345  128080 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0124 17:46:30.343349  128080 command_runner.go:130] > VERSION_ID="20.04"
	I0124 17:46:30.343362  128080 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0124 17:46:30.343371  128080 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0124 17:46:30.343384  128080 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0124 17:46:30.343399  128080 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0124 17:46:30.343409  128080 command_runner.go:130] > VERSION_CODENAME=focal
	I0124 17:46:30.343419  128080 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0124 17:46:30.343470  128080 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 17:46:30.343485  128080 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 17:46:30.343493  128080 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 17:46:30.343499  128080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 17:46:30.343510  128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
	I0124 17:46:30.343562  128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
	I0124 17:46:30.343616  128080 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
	I0124 17:46:30.343625  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /etc/ssl/certs/101262.pem
	I0124 17:46:30.343688  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 17:46:30.350606  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
	I0124 17:46:30.368431  128080 start.go:303] post-start completed in 146.178177ms
	I0124 17:46:30.368831  128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m02
	I0124 17:46:30.392658  128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
	I0124 17:46:30.392914  128080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 17:46:30.392951  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:30.415793  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
	I0124 17:46:30.504850  128080 command_runner.go:130] > 23%
	I0124 17:46:30.504919  128080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 17:46:30.508408  128080 command_runner.go:130] > 225G
	I0124 17:46:30.508596  128080 start.go:128] duration metric: createHost completed in 8.882674072s
	I0124 17:46:30.508619  128080 start.go:83] releasing machines lock for "multinode-585561-m02", held for 8.882814181s
	I0124 17:46:30.508680  128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m02
	I0124 17:46:30.534666  128080 out.go:177] * Found network options:
	I0124 17:46:30.536354  128080 out.go:177]   - NO_PROXY=192.168.58.2
	W0124 17:46:30.537678  128080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0124 17:46:30.537736  128080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0124 17:46:30.537807  128080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 17:46:30.537845  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:30.537921  128080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0124 17:46:30.537971  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:46:30.564602  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
	I0124 17:46:30.565920  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
	I0124 17:46:30.652966  128080 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0124 17:46:30.653005  128080 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0124 17:46:30.653015  128080 command_runner.go:130] > Device: e3h/227d	Inode: 538245      Links: 1
	I0124 17:46:30.653022  128080 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0124 17:46:30.653028  128080 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0124 17:46:30.653033  128080 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0124 17:46:30.653037  128080 command_runner.go:130] > Change: 2023-01-24 17:29:01.213660493 +0000
	I0124 17:46:30.653041  128080 command_runner.go:130] >  Birth: -
	I0124 17:46:30.681368  128080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0124 17:46:30.682818  128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 17:46:30.703226  128080 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 17:46:30.703353  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0124 17:46:30.710062  128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0124 17:46:30.722812  128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0124 17:46:30.738206  128080 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0124 17:46:30.738246  128080 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0124 17:46:30.738259  128080 start.go:472] detecting cgroup driver to use...
	I0124 17:46:30.738295  128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 17:46:30.738451  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 17:46:30.751102  128080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0124 17:46:30.751129  128080 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0124 17:46:30.751194  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0124 17:46:30.758786  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 17:46:30.766465  128080 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 17:46:30.766521  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 17:46:30.774158  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 17:46:30.781590  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 17:46:30.789295  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 17:46:30.796961  128080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 17:46:30.803813  128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 17:46:30.811556  128080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 17:46:30.817890  128080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0124 17:46:30.817965  128080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
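
Two kernel prerequisites are handled above: net.bridge.bridge-nf-call-iptables is checked and IPv4 forwarding is switched on, both needed for pod networking. A sketch of touching the same knobs through /proc/sys (reading is safe anywhere; the write needs root, so its failure is just reported):

// Reads the bridge-netfilter sysctl and enables IPv4 forwarding, mirroring
// the sysctl and "echo 1 > ip_forward" steps the log runs via sudo over SSH.
package main

import (
	"fmt"
	"os"
)

func main() {
	if v, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
		fmt.Printf("net.bridge.bridge-nf-call-iptables = %s", v)
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
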
	I0124 17:46:30.824328  128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:46:30.895621  128080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 17:46:30.978862  128080 start.go:472] detecting cgroup driver to use...
	I0124 17:46:30.978911  128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 17:46:30.978956  128080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 17:46:30.989300  128080 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0124 17:46:30.989325  128080 command_runner.go:130] > [Unit]
	I0124 17:46:30.989335  128080 command_runner.go:130] > Description=Docker Application Container Engine
	I0124 17:46:30.989343  128080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0124 17:46:30.989354  128080 command_runner.go:130] > BindsTo=containerd.service
	I0124 17:46:30.989364  128080 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0124 17:46:30.989373  128080 command_runner.go:130] > Wants=network-online.target
	I0124 17:46:30.989387  128080 command_runner.go:130] > Requires=docker.socket
	I0124 17:46:30.989397  128080 command_runner.go:130] > StartLimitBurst=3
	I0124 17:46:30.989407  128080 command_runner.go:130] > StartLimitIntervalSec=60
	I0124 17:46:30.989413  128080 command_runner.go:130] > [Service]
	I0124 17:46:30.989422  128080 command_runner.go:130] > Type=notify
	I0124 17:46:30.989432  128080 command_runner.go:130] > Restart=on-failure
	I0124 17:46:30.989443  128080 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0124 17:46:30.989458  128080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0124 17:46:30.989475  128080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0124 17:46:30.989488  128080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0124 17:46:30.989503  128080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0124 17:46:30.989517  128080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0124 17:46:30.989531  128080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0124 17:46:30.989545  128080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0124 17:46:30.989562  128080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0124 17:46:30.989576  128080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0124 17:46:30.989586  128080 command_runner.go:130] > ExecStart=
	I0124 17:46:30.989610  128080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0124 17:46:30.989621  128080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0124 17:46:30.989633  128080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0124 17:46:30.989646  128080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0124 17:46:30.989652  128080 command_runner.go:130] > LimitNOFILE=infinity
	I0124 17:46:30.989661  128080 command_runner.go:130] > LimitNPROC=infinity
	I0124 17:46:30.989670  128080 command_runner.go:130] > LimitCORE=infinity
	I0124 17:46:30.989682  128080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0124 17:46:30.989693  128080 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0124 17:46:30.989702  128080 command_runner.go:130] > TasksMax=infinity
	I0124 17:46:30.989709  128080 command_runner.go:130] > TimeoutStartSec=0
	I0124 17:46:30.989722  128080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0124 17:46:30.989733  128080 command_runner.go:130] > Delegate=yes
	I0124 17:46:30.989747  128080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0124 17:46:30.989758  128080 command_runner.go:130] > KillMode=process
	I0124 17:46:30.989768  128080 command_runner.go:130] > [Install]
	I0124 17:46:30.989775  128080 command_runner.go:130] > WantedBy=multi-user.target
	I0124 17:46:30.989800  128080 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 17:46:30.989838  128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 17:46:30.998964  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 17:46:31.012139  128080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0124 17:46:31.012171  128080 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
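
With the runtime decided, crictl is pointed at cri-dockerd by rewriting /etc/crictl.yaml, replacing the containerd endpoints written a moment earlier. A sketch of producing that file in Go (written to the working directory here rather than /etc, and without the sudo tee indirection):

// Writes the two-line crictl.yaml the log tees into place, so crictl talks
// to the cri-dockerd socket instead of containerd's.
package main

import (
	"fmt"
	"os"
)

func main() {
	conf := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n" +
		"image-endpoint: unix:///var/run/cri-dockerd.sock\n"
	if err := os.WriteFile("crictl.yaml", []byte(conf), 0644); err != nil {
		panic(err)
	}
	fmt.Print(conf)
}
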
	I0124 17:46:31.013058  128080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 17:46:31.093404  128080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 17:46:31.184202  128080 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 17:46:31.184231  128080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 17:46:31.198354  128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:46:31.284809  128080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 17:46:31.487366  128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 17:46:31.564876  128080 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0124 17:46:31.564948  128080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 17:46:31.634912  128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 17:46:31.711104  128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 17:46:31.783648  128080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 17:46:31.794622  128080 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 17:46:31.794685  128080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 17:46:31.797837  128080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0124 17:46:31.797859  128080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0124 17:46:31.797868  128080 command_runner.go:130] > Device: ech/236d	Inode: 206         Links: 1
	I0124 17:46:31.797879  128080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0124 17:46:31.797893  128080 command_runner.go:130] > Access: 2023-01-24 17:46:31.788460948 +0000
	I0124 17:46:31.797903  128080 command_runner.go:130] > Modify: 2023-01-24 17:46:31.788460948 +0000
	I0124 17:46:31.797912  128080 command_runner.go:130] > Change: 2023-01-24 17:46:31.792461232 +0000
	I0124 17:46:31.797921  128080 command_runner.go:130] >  Birth: -
	I0124 17:46:31.797948  128080 start.go:540] Will wait 60s for crictl version
	I0124 17:46:31.797987  128080 ssh_runner.go:195] Run: which crictl
	I0124 17:46:31.800421  128080 command_runner.go:130] > /usr/bin/crictl
	I0124 17:46:31.800636  128080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 17:46:31.888765  128080 command_runner.go:130] > Version:  0.1.0
	I0124 17:46:31.888788  128080 command_runner.go:130] > RuntimeName:  docker
	I0124 17:46:31.888794  128080 command_runner.go:130] > RuntimeVersion:  20.10.22
	I0124 17:46:31.888800  128080 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0124 17:46:31.890395  128080 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 17:46:31.890445  128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 17:46:31.917315  128080 command_runner.go:130] > 20.10.22
	I0124 17:46:31.917381  128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 17:46:31.945338  128080 command_runner.go:130] > 20.10.22
	I0124 17:46:31.951194  128080 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 17:46:31.952812  128080 out.go:177]   - env NO_PROXY=192.168.58.2
	I0124 17:46:31.954263  128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 17:46:31.977477  128080 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0124 17:46:31.980857  128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 17:46:31.989900  128080 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.3
	I0124 17:46:31.989931  128080 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:46:31.990057  128080 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
	I0124 17:46:31.990090  128080 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
	I0124 17:46:31.990103  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0124 17:46:31.990113  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0124 17:46:31.990124  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0124 17:46:31.990134  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0124 17:46:31.990181  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
	W0124 17:46:31.990210  128080 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
	I0124 17:46:31.990219  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
	I0124 17:46:31.990240  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
	I0124 17:46:31.990261  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
	I0124 17:46:31.990280  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
	I0124 17:46:31.990320  128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
	I0124 17:46:31.990344  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem -> /usr/share/ca-certificates/10126.pem
	I0124 17:46:31.990355  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /usr/share/ca-certificates/101262.pem
	I0124 17:46:31.990365  128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:46:31.990751  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 17:46:32.007864  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0124 17:46:32.024362  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 17:46:32.041021  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 17:46:32.057521  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
	I0124 17:46:32.074154  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
	I0124 17:46:32.091213  128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 17:46:32.108020  128080 ssh_runner.go:195] Run: openssl version
	I0124 17:46:32.112413  128080 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0124 17:46:32.112532  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 17:46:32.119496  128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:46:32.122638  128080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:46:32.122706  128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:46:32.122770  128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 17:46:32.127189  128080 command_runner.go:130] > b5213941
	I0124 17:46:32.127363  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 17:46:32.134494  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
	I0124 17:46:32.141565  128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
	I0124 17:46:32.144437  128080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
	I0124 17:46:32.144456  128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
	I0124 17:46:32.144492  128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
	I0124 17:46:32.148977  128080 command_runner.go:130] > 51391683
	I0124 17:46:32.149030  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
	I0124 17:46:32.156508  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
	I0124 17:46:32.165557  128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
	I0124 17:46:32.168669  128080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
	I0124 17:46:32.168742  128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
	I0124 17:46:32.168795  128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
	I0124 17:46:32.173480  128080 command_runner.go:130] > 3ec20f2e
	I0124 17:46:32.173540  128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
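
The openssl/ln sequence above installs each CA into the OpenSSL trust store: compute the certificate's subject hash, then link <hash>.0 to the PEM file so OpenSSL-based clients can find it. A sketch of the same two steps (paths taken from the log; running it against /etc/ssl/certs needs root, so the error is simply printed):

// installCA asks openssl for the subject hash of a CA file and creates the
// <hash>.0 symlink in the trust directory, mirroring the ln -fs above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs does
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
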
	I0124 17:46:32.180694  128080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 17:46:32.245327  128080 command_runner.go:130] > cgroupfs
	I0124 17:46:32.246641  128080 cni.go:84] Creating CNI manager for ""
	I0124 17:46:32.246655  128080 cni.go:136] 2 nodes found, recommending kindnet
	I0124 17:46:32.246665  128080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 17:46:32.246680  128080 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 17:46:32.246792  128080 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-585561-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
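The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered with per-node values such as the node name, IP, and CRI socket. A sketch of that rendering for just the nodeRegistration fragment, using text/template with a hypothetical value struct (the template text is abridged from the config above):

// Renders the per-node kubeadm fields the way a template-driven generator
// would, substituting the socket, node name, and node IP seen in the log.
package main

import (
	"os"
	"text/template"
)

const tmpl = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("node").Parse(tmpl))
	_ = t.Execute(os.Stdout, struct {
		CRISocket, NodeName, NodeIP string
	}{"/var/run/cri-dockerd.sock", "multinode-585561-m02", "192.168.58.3"})
}
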
	I0124 17:46:32.246888  128080 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 17:46:32.246936  128080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0124 17:46:32.253621  128080 command_runner.go:130] > kubeadm
	I0124 17:46:32.253639  128080 command_runner.go:130] > kubectl
	I0124 17:46:32.253669  128080 command_runner.go:130] > kubelet
	I0124 17:46:32.254172  128080 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 17:46:32.254230  128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0124 17:46:32.260899  128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0124 17:46:32.273147  128080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 17:46:32.285590  128080 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0124 17:46:32.288442  128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 17:46:32.297638  128080 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:46:32.297886  128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:46:32.297915  128080 start.go:288] JoinCluster: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:46:32.298021  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0124 17:46:32.298060  128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:46:32.321952  128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:46:32.745020  128080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zt8ug3.1xbm3om5dnjax0pw --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 
	I0124 17:46:32.745095  128080 start.go:309] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0124 17:46:32.745129  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt8ug3.1xbm3om5dnjax0pw --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m02"
	I0124 17:46:32.780280  128080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0124 17:46:32.807385  128080 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0124 17:46:32.807411  128080 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
	I0124 17:46:32.807420  128080 command_runner.go:130] > OS: Linux
	I0124 17:46:32.807427  128080 command_runner.go:130] > CGROUPS_CPU: enabled
	I0124 17:46:32.807433  128080 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0124 17:46:32.807438  128080 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0124 17:46:32.807443  128080 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0124 17:46:32.807448  128080 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0124 17:46:32.807452  128080 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0124 17:46:32.807462  128080 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0124 17:46:32.807472  128080 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0124 17:46:32.807476  128080 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0124 17:46:32.885368  128080 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0124 17:46:32.885392  128080 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0124 17:46:32.916258  128080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 17:46:32.916285  128080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 17:46:32.916291  128080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0124 17:46:32.996754  128080 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0124 17:46:34.515170  128080 command_runner.go:130] > This node has joined the cluster:
	I0124 17:46:34.515247  128080 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0124 17:46:34.515261  128080 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0124 17:46:34.515272  128080 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0124 17:46:34.517967  128080 command_runner.go:130] ! W0124 17:46:32.779878    1352 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0124 17:46:34.518001  128080 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
	I0124 17:46:34.518016  128080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 17:46:34.518043  128080 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt8ug3.1xbm3om5dnjax0pw --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m02": (1.772900386s)
	I0124 17:46:34.518066  128080 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0124 17:46:34.673090  128080 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0124 17:46:34.673122  128080 start.go:290] JoinCluster complete in 2.375206941s
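
The join above is two shell steps: "kubeadm token create --print-join-command --ttl=0" on the control plane, then the printed "kubeadm join ..." command on the worker. A minimal Go sketch of the first step (a hypothetical helper, not minikube's own code; assumes kubeadm is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// printJoinCommand asks kubeadm for a join command with a non-expiring
// token (--ttl=0), the same first step the log runs over SSH on the
// control plane. Hypothetical helper, not minikube's actual code.
func printJoinCommand() (string, error) {
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("kubeadm token create: %v: %s", err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	join, err := printJoinCommand()
	if err != nil {
		panic(err)
	}
	// The printed command is then run on the joining worker, as the
	// log does with --ignore-preflight-errors=all and an explicit
	// --cri-socket and --node-name.
	fmt.Println(join)
}
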
	I0124 17:46:34.673134  128080 cni.go:84] Creating CNI manager for ""
	I0124 17:46:34.673141  128080 cni.go:136] 2 nodes found, recommending kindnet
	I0124 17:46:34.673199  128080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0124 17:46:34.676417  128080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0124 17:46:34.676441  128080 command_runner.go:130] >   Size: 2828728   	Blocks: 5536       IO Block: 4096   regular file
	I0124 17:46:34.676452  128080 command_runner.go:130] > Device: 34h/52d	Inode: 535835      Links: 1
	I0124 17:46:34.676461  128080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0124 17:46:34.676475  128080 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0124 17:46:34.676487  128080 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0124 17:46:34.676507  128080 command_runner.go:130] > Change: 2023-01-24 17:29:00.473607792 +0000
	I0124 17:46:34.676514  128080 command_runner.go:130] >  Birth: -
	I0124 17:46:34.676563  128080 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0124 17:46:34.676578  128080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0124 17:46:34.689706  128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0124 17:46:34.866008  128080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0124 17:46:34.869059  128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0124 17:46:34.871106  128080 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0124 17:46:34.880587  128080 command_runner.go:130] > daemonset.apps/kindnet configured
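
The CNI step copies the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. Since "kubectl apply" is idempotent, objects that already match report "unchanged" and only the DaemonSet is "configured". A rough Go equivalent that pipes a manifest into kubectl (sketch; the cni.yaml path and KUBECONFIG are assumptions):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// applyManifest pipes manifest bytes into "kubectl apply -f -", the
// moral equivalent of the scp + kubectl apply pair in the log.
func applyManifest(kubeconfig string, manifest []byte) error {
	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", "-")
	cmd.Stdin = bytes.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	manifest, err := os.ReadFile("cni.yaml") // hypothetical local copy of the manifest
	if err != nil {
		panic(err)
	}
	if err := applyManifest(os.Getenv("KUBECONFIG"), manifest); err != nil {
		panic(err)
	}
	fmt.Println("applied")
}
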
	I0124 17:46:34.884679  128080 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:46:34.884894  128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 17:46:34.885165  128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0124 17:46:34.885175  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.885183  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.885189  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.886855  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.886871  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.886878  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.886883  128080 round_trippers.go:580]     Audit-Id: 497f5b19-393d-4b16-893c-1a13ae3475f1
	I0124 17:46:34.886888  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.886893  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.886899  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.886907  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.886917  128080 round_trippers.go:580]     Content-Length: 291
	I0124 17:46:34.886949  128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"425","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0124 17:46:34.887042  128080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-585561" context rescaled to 1 replicas
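
The GET against .../deployments/coredns/scale is the scale subresource; the replica count is read and only written back if it differs from the target (here spec.replicas is already 1, so no PUT follows). With client-go, a read-modify-write against that subresource looks roughly like this (sketch; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rescaleDeployment reads the scale subresource and issues the PUT only
// when the count actually differs, matching the behavior in the log.
func rescaleDeployment(kubeconfig, ns, name string, replicas int32) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.Background()
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count, no PUT needed
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}

func main() {
	fmt.Println(rescaleDeployment("/path/to/kubeconfig", "kube-system", "coredns", 1))
}
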
	I0124 17:46:34.887072  128080 start.go:221] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0124 17:46:34.891190  128080 out.go:177] * Verifying Kubernetes components...
	I0124 17:46:34.892753  128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 17:46:34.902365  128080 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:46:34.902604  128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 17:46:34.902881  128080 node_ready.go:35] waiting up to 6m0s for node "multinode-585561-m02" to be "Ready" ...
	I0124 17:46:34.902939  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
	I0124 17:46:34.902950  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.902961  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.902971  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.905107  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:34.905129  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.905141  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.905150  128080 round_trippers.go:580]     Audit-Id: d8139444-727b-4f5f-bbc8-eb054eef8fe7
	I0124 17:46:34.905163  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.905177  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.905186  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.905195  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.905314  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
	I0124 17:46:34.905658  128080 node_ready.go:49] node "multinode-585561-m02" has status "Ready":"True"
	I0124 17:46:34.905673  128080 node_ready.go:38] duration metric: took 2.777735ms waiting for node "multinode-585561-m02" to be "Ready" ...
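
The node_ready wait above polls GET /api/v1/nodes/<name> until the node's Ready condition is True, with a 6m0s ceiling. The predicate reduces to a few lines of client-go (sketch; extends the previous example's package and adds the import corev1 "k8s.io/api/core/v1"):

// nodeIsReady is the check behind the node_ready wait: fetch the Node
// and test whether its Ready condition reports True.
func nodeIsReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
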
	I0124 17:46:34.905682  128080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0124 17:46:34.905749  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0124 17:46:34.905760  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.905771  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.905780  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.908622  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:34.908642  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.908652  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.908661  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.908669  128080 round_trippers.go:580]     Audit-Id: 79701e73-3f1c-4466-99c3-cd0d2cee8949
	I0124 17:46:34.908682  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.908694  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.908704  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.910359  128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"468"},"items":[{"metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65261 chars]
	I0124 17:46:34.913016  128080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:34.913075  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
	I0124 17:46:34.913084  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.913091  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.913097  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.914706  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.914721  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.914731  128080 round_trippers.go:580]     Audit-Id: e7b2d476-b7a8-4125-b812-f0dc6ea3efa1
	I0124 17:46:34.914757  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.914770  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.914779  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.914790  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.914801  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.914924  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5901 chars]
	I0124 17:46:34.915322  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:34.915334  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.915342  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.915348  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.917048  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.917063  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.917069  128080 round_trippers.go:580]     Audit-Id: 7b8544d5-7c5b-41d4-a28e-160f995249fc
	I0124 17:46:34.917075  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.917080  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.917085  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.917092  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.917100  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.917221  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
	I0124 17:46:34.917486  128080 pod_ready.go:92] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:34.917495  128080 pod_ready.go:81] duration metric: took 4.459856ms waiting for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
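
Each pod_ready wait is the same shape: fetch the pod, then test its PodReady condition (sketch, same package as the examples above):

// podIsReady mirrors the pod_ready check: a system pod counts as Ready
// once its PodReady condition reports True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
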
	I0124 17:46:34.917504  128080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:34.917543  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-585561
	I0124 17:46:34.917550  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.917557  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.917565  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.919030  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.919043  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.919050  128080 round_trippers.go:580]     Audit-Id: 3b87f326-ba3e-47f2-8be8-98a00127b8a3
	I0124 17:46:34.919056  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.919064  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.919075  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.919085  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.919100  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.919178  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-585561","namespace":"kube-system","uid":"e90a4912-09cf-4017-b275-36e5cbaf8fb7","resourceVersion":"307","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.mirror":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.seen":"2023-01-24T17:45:58.071793905Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5806 chars]
	I0124 17:46:34.919507  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:34.919519  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.919526  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.919532  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.921109  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.921129  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.921139  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.921148  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.921157  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.921166  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.921178  128080 round_trippers.go:580]     Audit-Id: 802452fc-eef0-42bd-b7f5-9b7b735dd955
	I0124 17:46:34.921188  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.921312  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
	I0124 17:46:34.921668  128080 pod_ready.go:92] pod "etcd-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:34.921679  128080 pod_ready.go:81] duration metric: took 4.167237ms waiting for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:34.921691  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:34.921728  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-585561
	I0124 17:46:34.921739  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.921746  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.921755  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.923272  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.923289  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.923298  128080 round_trippers.go:580]     Audit-Id: 384d6620-f3dc-41f6-9df5-dbbaca36ccaa
	I0124 17:46:34.923307  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.923319  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.923331  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.923343  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.923354  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.923466  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-585561","namespace":"kube-system","uid":"b6111d69-e414-4456-b981-c45749f2bc69","resourceVersion":"270","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.mirror":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.seen":"2023-01-24T17:45:58.071829413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0124 17:46:34.923874  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:34.923887  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.923894  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.923900  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.925292  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.925310  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.925322  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.925330  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.925341  128080 round_trippers.go:580]     Audit-Id: c5e7c1a0-f4ec-442e-952d-a055413f47d7
	I0124 17:46:34.925354  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.925361  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.925369  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.925433  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
	I0124 17:46:34.925680  128080 pod_ready.go:92] pod "kube-apiserver-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:34.925689  128080 pod_ready.go:81] duration metric: took 3.992929ms waiting for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:34.925697  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:34.925730  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-585561
	I0124 17:46:34.925742  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.925748  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.925757  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.927091  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.927111  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.927120  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.927130  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.927140  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.927150  128080 round_trippers.go:580]     Audit-Id: 1da2c34e-098b-4751-9996-757f61d88dde
	I0124 17:46:34.927163  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.927174  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.927260  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-585561","namespace":"kube-system","uid":"64983300-9251-4324-9c8a-e0ff30ae4238","resourceVersion":"385","creationTimestamp":"2023-01-24T17:45:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.mirror":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.seen":"2023-01-24T17:45:47.607485043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0124 17:46:34.927616  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:34.927628  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:34.927635  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:34.927641  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:34.929053  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:34.929071  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:34.929081  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:34 GMT
	I0124 17:46:34.929090  128080 round_trippers.go:580]     Audit-Id: 33fd5bb9-63ee-49e7-be22-f13c8137dae6
	I0124 17:46:34.929098  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:34.929106  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:34.929116  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:34.929128  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:34.929242  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
	I0124 17:46:34.929489  128080 pod_ready.go:92] pod "kube-controller-manager-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:34.929498  128080 pod_ready.go:81] duration metric: took 3.796338ms waiting for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:34.929505  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-txqvw" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:35.103905  128080 request.go:622] Waited for 174.333925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
	I0124 17:46:35.103967  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
	I0124 17:46:35.103974  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:35.103985  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:35.103996  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:35.106149  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:35.106171  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:35.106180  128080 round_trippers.go:580]     Audit-Id: 99042336-cd88-47a2-b43b-213db072d145
	I0124 17:46:35.106188  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:35.106195  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:35.106203  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:35.106213  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:35.106226  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:35 GMT
	I0124 17:46:35.106328  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"458","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
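
The "Waited ... due to client-side throttling" messages come from client-go's own token-bucket limiter, not from API priority and fairness: the rest.Config dumped earlier leaves QPS and Burst at 0, which client-go defaults to 5 requests/s with a burst of 10. If needed, the budget can be raised when building the client (sketch, same package as the examples above; the values are illustrative):

// newLooserClient builds a clientset with a higher request budget than
// the client-go default (QPS 5, burst 10 when rest.Config leaves both
// fields zero), which is what produces the throttling waits above.
func newLooserClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // illustrative, not minikube's setting
	cfg.Burst = 100 // illustrative, not minikube's setting
	return kubernetes.NewForConfig(cfg)
}
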
	I0124 17:46:35.302962  128080 request.go:622] Waited for 196.275858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
	I0124 17:46:35.303023  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
	I0124 17:46:35.303027  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:35.303048  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:35.303056  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:35.305148  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:35.305173  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:35.305181  128080 round_trippers.go:580]     Audit-Id: fb8464b5-b29c-4d87-ab16-798859c01b1c
	I0124 17:46:35.305187  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:35.305195  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:35.305204  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:35.305216  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:35.305228  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:35 GMT
	I0124 17:46:35.305345  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
	I0124 17:46:35.806505  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
	I0124 17:46:35.806526  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:35.806534  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:35.806540  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:35.808640  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:35.808661  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:35.808668  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:35.808674  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:35.808679  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:35.808684  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:35 GMT
	I0124 17:46:35.808689  128080 round_trippers.go:580]     Audit-Id: e696418c-adcb-4b81-88da-d3011d643212
	I0124 17:46:35.808696  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:35.808800  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"458","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0124 17:46:35.809158  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
	I0124 17:46:35.809172  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:35.809181  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:35.809187  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:35.810803  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:35.810826  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:35.810835  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:35.810844  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:35.810853  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:35.810862  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:35 GMT
	I0124 17:46:35.810870  128080 round_trippers.go:580]     Audit-Id: 49f42aa1-5ed5-48d8-a566-6c63c175fb19
	I0124 17:46:35.810883  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:35.810948  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
	I0124 17:46:36.306643  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
	I0124 17:46:36.306672  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:36.306684  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:36.306711  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:36.308954  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:36.308982  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:36.308992  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:36.309000  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:36.309008  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:36.309017  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:36.309026  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:36 GMT
	I0124 17:46:36.309041  128080 round_trippers.go:580]     Audit-Id: 27b22d5b-9a69-4d5d-a35e-d2783f53de23
	I0124 17:46:36.309148  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"471","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0124 17:46:36.309567  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
	I0124 17:46:36.309576  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:36.309583  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:36.309590  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:36.311658  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:36.311681  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:36.311690  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:36 GMT
	I0124 17:46:36.311699  128080 round_trippers.go:580]     Audit-Id: 10d56837-ad6e-4dc7-9076-f2437d09e638
	I0124 17:46:36.311706  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:36.311724  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:36.311732  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:36.311745  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:36.311842  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
	I0124 17:46:36.805879  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
	I0124 17:46:36.805900  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:36.805908  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:36.805914  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:36.807989  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:36.808013  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:36.808023  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:36.808032  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:36 GMT
	I0124 17:46:36.808041  128080 round_trippers.go:580]     Audit-Id: 587e119f-2e6c-48e3-8322-0e9f1e8eb42a
	I0124 17:46:36.808049  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:36.808054  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:36.808062  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:36.808187  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"480","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0124 17:46:36.808695  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
	I0124 17:46:36.808708  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:36.808715  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:36.808721  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:36.810522  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:36.810537  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:36.810543  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:36.810549  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:36.810554  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:36.810559  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:36.810567  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:36 GMT
	I0124 17:46:36.810575  128080 round_trippers.go:580]     Audit-Id: f8e0330a-1bb9-40b1-a933-c5f0a1e1bb6b
	I0124 17:46:36.810647  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
	I0124 17:46:36.810929  128080 pod_ready.go:92] pod "kube-proxy-txqvw" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:36.810951  128080 pod_ready.go:81] duration metric: took 1.881438802s waiting for pod "kube-proxy-txqvw" in "kube-system" namespace to be "Ready" ...
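
The roughly 500ms spacing between the kube-proxy-txqvw GETs above is a fixed-interval poll bounded by the 6m budget. With apimachinery's wait package and podIsReady from the earlier sketch, the loop could be written as (sketch; adds imports "time" and "k8s.io/apimachinery/pkg/util/wait"):

// waitForPodReady reproduces the ~500ms poll cadence visible in the
// timestamps above, giving up after the same 6-minute budget.
func waitForPodReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return podIsReady(pod), nil
	})
}
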
	I0124 17:46:36.810962  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:36.811016  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrvx
	I0124 17:46:36.811030  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:36.811037  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:36.811043  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:36.812833  128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0124 17:46:36.812853  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:36.812863  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:36 GMT
	I0124 17:46:36.812872  128080 round_trippers.go:580]     Audit-Id: b13a419b-1fb9-47ba-b3f4-b04fb85c822a
	I0124 17:46:36.812882  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:36.812901  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:36.812916  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:36.812930  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:36.813030  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wxrvx","generateName":"kube-proxy-","namespace":"kube-system","uid":"435cbf4e-148f-46a7-894c-73bea3a2bb9c","resourceVersion":"386","creationTimestamp":"2023-01-24T17:46:10Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0124 17:46:36.903706  128080 request.go:622] Waited for 90.265119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:36.903760  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:36.903766  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:36.903774  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:36.903781  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:36.906031  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:36.906056  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:36.906067  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:36.906076  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:36.906085  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:36.906095  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:36 GMT
	I0124 17:46:36.906105  128080 round_trippers.go:580]     Audit-Id: 945dcc0c-30de-44b3-9dfb-8429cc53242c
	I0124 17:46:36.906114  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:36.906300  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
	I0124 17:46:36.906655  128080 pod_ready.go:92] pod "kube-proxy-wxrvx" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:36.906673  128080 pod_ready.go:81] duration metric: took 95.698205ms waiting for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:36.906683  128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:37.103009  128080 request.go:622] Waited for 196.27182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
	I0124 17:46:37.103069  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
	I0124 17:46:37.103074  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:37.103081  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:37.103088  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:37.105313  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:37.105350  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:37.105362  128080 round_trippers.go:580]     Audit-Id: a41ca2bc-d889-4352-b78b-1657882d4df7
	I0124 17:46:37.105370  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:37.105383  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:37.105395  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:37.105406  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:37.105417  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:37 GMT
	I0124 17:46:37.105524  128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-585561","namespace":"kube-system","uid":"99936e13-49bf-4ab3-82ea-812373f654b6","resourceVersion":"291","creationTimestamp":"2023-01-24T17:45:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.mirror":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.seen":"2023-01-24T17:45:47.607460620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0124 17:46:37.303177  128080 request.go:622] Waited for 197.268104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:37.303239  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
	I0124 17:46:37.303243  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:37.303250  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:37.303256  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:37.305709  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:37.305732  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:37.305742  128080 round_trippers.go:580]     Audit-Id: 742d673f-c898-44bc-8806-324a8af3c921
	I0124 17:46:37.305749  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:37.305754  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:37.305760  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:37.305768  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:37.305780  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:37 GMT
	I0124 17:46:37.305907  128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
	I0124 17:46:37.306205  128080 pod_ready.go:92] pod "kube-scheduler-multinode-585561" in "kube-system" namespace has status "Ready":"True"
	I0124 17:46:37.306214  128080 pod_ready.go:81] duration metric: took 399.525668ms waiting for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
	I0124 17:46:37.306224  128080 pod_ready.go:38] duration metric: took 2.40052908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
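The "Ready" gate applied above reduces to reading the PodReady condition off each pod's status. A minimal client-go sketch of that check follows; the package name and helper are illustrative, not minikube's actual pod_ready.go:

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // which is the same signal the pod_ready.go lines above record.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }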
	I0124 17:46:37.306241  128080 system_svc.go:44] waiting for kubelet service to be running ....
	I0124 17:46:37.306280  128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 17:46:37.316166  128080 system_svc.go:56] duration metric: took 9.916335ms WaitForService to wait for kubelet.
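The kubelet probe logged here is simply an exit-status check on systemctl, run on the node over SSH. A rough standalone equivalent (hypothetical helper; minikube's ssh_runner executes this remotely):

    package svccheck

    import "os/exec"

    // kubeletActive mirrors the `systemctl is-active --quiet` probe above:
    // systemctl exits 0 when the unit is active, non-zero otherwise.
    func kubeletActive() bool {
    	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }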
	I0124 17:46:37.316189  128080 kubeadm.go:578] duration metric: took 2.429088674s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0124 17:46:37.316207  128080 node_conditions.go:102] verifying NodePressure condition ...
	I0124 17:46:37.503635  128080 request.go:622] Waited for 187.342903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0124 17:46:37.503694  128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0124 17:46:37.503698  128080 round_trippers.go:469] Request Headers:
	I0124 17:46:37.503706  128080 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0124 17:46:37.503712  128080 round_trippers.go:473]     Accept: application/json, */*
	I0124 17:46:37.506053  128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0124 17:46:37.506077  128080 round_trippers.go:577] Response Headers:
	I0124 17:46:37.506087  128080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
	I0124 17:46:37.506095  128080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
	I0124 17:46:37.506103  128080 round_trippers.go:580]     Date: Tue, 24 Jan 2023 17:46:37 GMT
	I0124 17:46:37.506111  128080 round_trippers.go:580]     Audit-Id: 047adda2-e523-4f86-b13a-9557d89d91bb
	I0124 17:46:37.506123  128080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0124 17:46:37.506132  128080 round_trippers.go:580]     Content-Type: application/json
	I0124 17:46:37.506263  128080 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"482"},"items":[{"metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10485 chars]
	I0124 17:46:37.506712  128080 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0124 17:46:37.506726  128080 node_conditions.go:123] node cpu capacity is 8
	I0124 17:46:37.506737  128080 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0124 17:46:37.506740  128080 node_conditions.go:123] node cpu capacity is 8
	I0124 17:46:37.506744  128080 node_conditions.go:105] duration metric: took 190.533637ms to run NodePressure ...
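The NodePressure pass reads the same NodeList payload shown above and extracts each node's cpu and ephemeral-storage capacity. A hedged client-go sketch of that read (helper name is illustrative):

    package nodecheck

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printCapacities lists every node and prints the two capacity fields
    // that the node_conditions.go lines above report.
    func printCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }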
	I0124 17:46:37.506753  128080 start.go:226] waiting for startup goroutines ...
	I0124 17:46:37.507007  128080 ssh_runner.go:195] Run: rm -f paused
	I0124 17:46:37.556258  128080 start.go:538] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0124 17:46:37.559978  128080 out.go:177] * Done! kubectl is now configured to use "multinode-585561" cluster and "default" namespace by default
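At this point the cluster is reachable through the kubeconfig minikube just wrote. A minimal sketch of connecting with client-go; raising QPS/Burst above client-go's defaults (5 QPS / 10 burst) is one way to avoid the "Waited for ... due to client-side throttling" pauses seen earlier. The kubeconfig path and the limits chosen here are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Default kubeconfig location; minikube writes its context here.
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Looser client-side rate limits than the defaults that produced
    	// the throttling waits logged above (values are illustrative).
    	cfg.QPS = 50
    	cfg.Burst = 100

    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Println(n.Name)
    	}
    }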
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-01-24 17:45:29 UTC, end at Tue 2023-01-24 17:49:46 UTC. --
	Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.633138741Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.633148676Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634323356Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634352100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634365187Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634374211Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355381779Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355410690Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355415966Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355574482Z" level=info msg="Loading containers: start."
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.435750441Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.470587188Z" level=info msg="Loading containers: done."
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.480577419Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.480635872Z" level=info msg="Daemon has completed initialization"
	Jan 24 17:45:39 multinode-585561 systemd[1]: Started Docker Application Container Engine.
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.498943247Z" level=info msg="API listen on [::]:2376"
	Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.502639549Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 24 17:46:12 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:12.460746688Z" level=info msg="ignoring event" container=37e8478501e933582413e297c1e673f6918d6429a97123fb0b07ce4732e4c936 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 24 17:46:12 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:12.577129124Z" level=info msg="ignoring event" container=e50d011e65c7e1f08685495eb187d579f1ae10e39b6888c309fbb45306a4c6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 24 17:46:13 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:13.011027304Z" level=info msg="ignoring event" container=19a971b91106d9708930bbfeba83bc98aab4cc9036d303d4f9f85d0e9193d087 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 24 17:46:14 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:14.352201402Z" level=info msg="ignoring event" container=5a2c5d625a14975e0548f8542d064947537e1d3d93b966e96273b61c7d512044 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 24 17:46:15 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:15.057917460Z" level=info msg="ignoring event" container=2d694add5bf5369c664ffa57535f4fd192341b356eb4489ace3841139b339b6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 24 17:46:16 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:16.200600114Z" level=info msg="ignoring event" container=937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 24 17:46:17 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:17.234437482Z" level=info msg="ignoring event" container=e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 24 17:46:18 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:18.254551352Z" level=info msg="ignoring event" container=2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	fb3da35f45bf5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Running             busybox                   0                   124081a927014
	3e1eaf7c0054a       5185b96f0becf                                                                                         3 minutes ago       Running             coredns                   0                   47a63c2278997
	28cc12c3f1288       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              3 minutes ago       Running             kindnet-cni               0                   9aef0e100458e
	1f47880f5c352       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       0                   99236eb3b001f
	7e5eddf7c5d55       46a6bb3c77ce0                                                                                         3 minutes ago       Running             kube-proxy                0                   747975b188cac
	8d7a8a4801df0       e9c08e11b07f6                                                                                         3 minutes ago       Running             kube-controller-manager   0                   ff9340f5d9bcd
	a8a00c2b5f80f       fce326961ae2d                                                                                         3 minutes ago       Running             etcd                      0                   d7ec06dc1a21d
	8db5094d208be       deb04688c4a35                                                                                         3 minutes ago       Running             kube-apiserver            0                   4fdcbd8bc5041
	8af55922f6ee3       655493523f607                                                                                         3 minutes ago       Running             kube-scheduler            0                   8841f3ddae517
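The table above is collected from the runtime on the node; the same listing is available programmatically over the Docker API. A sketch using the Docker Go SDK, assuming DOCKER_HOST in the environment points at the node's daemon (helper name is illustrative):

    package dockercheck

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/api/types"
    	"github.com/docker/docker/client"
    )

    // listContainers prints roughly the same columns as the table above.
    func listContainers(ctx context.Context) error {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		return err
    	}
    	defer cli.Close()
    	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true})
    	if err != nil {
    		return err
    	}
    	for _, c := range containers {
    		fmt.Println(c.ID[:12], c.Image, c.State)
    	}
    	return nil
    }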
	
	* 
	* ==> coredns [3e1eaf7c0054] <==
	* [INFO] 10.244.0.3:46987 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115946s
	[INFO] 10.244.1.2:33571 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152422s
	[INFO] 10.244.1.2:48583 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001909051s
	[INFO] 10.244.1.2:38742 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115129s
	[INFO] 10.244.1.2:59507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086546s
	[INFO] 10.244.1.2:32834 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001506643s
	[INFO] 10.244.1.2:42786 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069047s
	[INFO] 10.244.1.2:46959 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090386s
	[INFO] 10.244.1.2:54809 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074209s
	[INFO] 10.244.0.3:52431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127147s
	[INFO] 10.244.0.3:59453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123092s
	[INFO] 10.244.0.3:54130 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104928s
	[INFO] 10.244.0.3:50539 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097535s
	[INFO] 10.244.1.2:58908 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133496s
	[INFO] 10.244.1.2:55440 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114378s
	[INFO] 10.244.1.2:57653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117943s
	[INFO] 10.244.1.2:50356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122998s
	[INFO] 10.244.0.3:44564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136332s
	[INFO] 10.244.0.3:48809 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00012631s
	[INFO] 10.244.0.3:34982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215444s
	[INFO] 10.244.0.3:48866 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00012193s
	[INFO] 10.244.1.2:45388 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159094s
	[INFO] 10.244.1.2:32868 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130065s
	[INFO] 10.244.1.2:52927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093822s
	[INFO] 10.244.1.2:44825 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068795s
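These queries come from the busybox test pods resolving in-cluster names against CoreDNS (10.96.0.10). The same lookup can be reproduced from any pod with the standard resolver; the sketch below only works in-cluster, where /etc/resolv.conf points at the cluster DNS:

    package dnscheck

    import (
    	"context"
    	"fmt"
    	"net"
    )

    // resolveService performs the same A-record lookup the busybox pods
    // issue above; run it inside a pod so cluster DNS is the resolver.
    func resolveService(ctx context.Context) error {
    	addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
    	if err != nil {
    		return err
    	}
    	fmt.Println(addrs) // typically [10.96.0.1]
    	return nil
    }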
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-585561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-585561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6b2c057f52b907b52814c670e5ac26b018123ade
	                    minikube.k8s.io/name=multinode-585561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_24T17_45_58_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Jan 2023 17:45:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-585561
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Jan 2023 17:49:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Jan 2023 17:46:59 +0000   Tue, 24 Jan 2023 17:45:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Jan 2023 17:46:59 +0000   Tue, 24 Jan 2023 17:45:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Jan 2023 17:46:59 +0000   Tue, 24 Jan 2023 17:45:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Jan 2023 17:46:59 +0000   Tue, 24 Jan 2023 17:46:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-585561
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 11af74b3a18d4d7295d17813eccf6dd7
	  System UUID:                603f4c0b-41f8-4d3d-9b3f-d4e2b09a393b
	  Boot ID:                    202c095e-d1d4-4b92-9c9d-a08c9f26c94d
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.22
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-7rp7j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 coredns-787d4945fb-lfdwf                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m35s
	  kube-system                 etcd-multinode-585561                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m48s
	  kube-system                 kindnet-4zggw                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m36s
	  kube-system                 kube-apiserver-multinode-585561             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-multinode-585561    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-wxrvx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-scheduler-multinode-585561             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m35s  kube-proxy       
	  Normal  Starting                 3m48s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s  kubelet          Node multinode-585561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s  kubelet          Node multinode-585561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s  kubelet          Node multinode-585561 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m48s  kubelet          Node multinode-585561 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m38s  kubelet          Node multinode-585561 status is now: NodeReady
	  Normal  RegisteredNode           3m36s  node-controller  Node multinode-585561 event: Registered Node multinode-585561 in Controller
	
	
	Name:               multinode-585561-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-585561-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Jan 2023 17:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-585561-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Jan 2023 17:49:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Jan 2023 17:47:04 +0000   Tue, 24 Jan 2023 17:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Jan 2023 17:47:04 +0000   Tue, 24 Jan 2023 17:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Jan 2023 17:47:04 +0000   Tue, 24 Jan 2023 17:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Jan 2023 17:47:04 +0000   Tue, 24 Jan 2023 17:46:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-585561-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 11af74b3a18d4d7295d17813eccf6dd7
	  System UUID:                29f7dbff-788f-4a96-9540-e33f700d45ce
	  Boot ID:                    202c095e-d1d4-4b92-9c9d-a08c9f26c94d
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.22
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-c86kc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-j5zlg               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m13s
	  kube-system                 kube-proxy-txqvw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m10s                  kube-proxy       
	  Normal  Starting                 3m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m13s)  kubelet          Node multinode-585561-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m13s)  kubelet          Node multinode-585561-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m13s)  kubelet          Node multinode-585561-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m12s                  kubelet          Node multinode-585561-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m11s                  node-controller  Node multinode-585561-m02 event: Registered Node multinode-585561-m02 in Controller
	
	
	Name:               multinode-585561-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-585561-m03
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Jan 2023 17:47:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-585561-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Jan 2023 17:49:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Jan 2023 17:47:36 +0000   Tue, 24 Jan 2023 17:47:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Jan 2023 17:47:36 +0000   Tue, 24 Jan 2023 17:47:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Jan 2023 17:47:36 +0000   Tue, 24 Jan 2023 17:47:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Jan 2023 17:47:36 +0000   Tue, 24 Jan 2023 17:47:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-585561-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 11af74b3a18d4d7295d17813eccf6dd7
	  System UUID:                01ee951f-caa0-4cd4-aba5-a87993504d5a
	  Boot ID:                    202c095e-d1d4-4b92-9c9d-a08c9f26c94d
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.22
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hscwc       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m47s
	  kube-system                 kube-proxy-z965l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m44s                  kube-proxy  
	  Normal  Starting                 2m6s                   kube-proxy  
	  Normal  NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet     Node multinode-585561-m03 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m47s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet     Node multinode-585561-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet     Node multinode-585561-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                2m46s                  kubelet     Node multinode-585561-m03 status is now: NodeReady
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m20s (x7 over 2m27s)  kubelet     Node multinode-585561-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x7 over 2m27s)  kubelet     Node multinode-585561-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m27s)  kubelet     Node multinode-585561-m03 status is now: NodeHasSufficientPID
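Note that m03 now reports PodCIDR 10.244.3.0/24, while the kube-controller-manager log below shows the node was first allocated 10.244.2.0/24 and re-allocated 10.244.3.0/24 when it re-registered after the restart. The assigned range can be read straight off the node spec (client-go sketch; helper name is illustrative):

    package cidrcheck

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podCIDR returns the range the controller-manager's range allocator
    // assigned to a node, e.g. "10.244.3.0/24" for m03 in this run.
    func podCIDR(ctx context.Context, cs *kubernetes.Clientset, node string) (string, error) {
    	n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
    	if err != nil {
    		return "", err
    	}
    	return n.Spec.PodCIDR, nil // n.Spec.PodCIDRs holds the full list
    }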
	
	* 
	* ==> dmesg <==
	* [  +0.007965] FS-Cache: N-cookie d=00000000479796fa{9p.inode} n=000000003d13ee1d
	[  +0.008725] FS-Cache: N-key=[8] '89a00f0200000000'
	[  +3.146479] FS-Cache: Duplicate cookie detected
	[  +0.004688] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006752] FS-Cache: O-cookie d=00000000479796fa{9p.inode} n=000000000a33edc1
	[  +0.008071] FS-Cache: O-key=[8] '88a00f0200000000'
	[  +0.004938] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006603] FS-Cache: N-cookie d=00000000479796fa{9p.inode} n=000000001a93f6e1
	[  +0.007465] FS-Cache: N-key=[8] '88a00f0200000000'
	[  +0.404647] FS-Cache: Duplicate cookie detected
	[  +0.004698] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006786] FS-Cache: O-cookie d=00000000479796fa{9p.inode} n=000000008a43e3d4
	[  +0.007636] FS-Cache: O-key=[8] '99a00f0200000000'
	[  +0.004975] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006629] FS-Cache: N-cookie d=00000000479796fa{9p.inode} n=00000000846a6a20
	[  +0.008738] FS-Cache: N-key=[8] '99a00f0200000000'
	[Jan24 17:36] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 7b ab a7 58 38 08 06
	[Jan24 17:37] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan24 17:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ce 8c ad 7a 7e 08 06
	[  +0.130814] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 69 bd a0 78 14 08 06
	[Jan24 17:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 15 94 bf f7 0e 08 06
	
	* 
	* ==> etcd [a8a00c2b5f80] <==
	* {"level":"info","ts":"2023-01-24T17:45:52.837Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-585561 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T17:45:53.757Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-01-24T17:45:53.757Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-24T17:46:26.858Z","caller":"traceutil/trace.go:171","msg":"trace[1820907400] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"124.887031ms","start":"2023-01-24T17:46:26.734Z","end":"2023-01-24T17:46:26.858Z","steps":["trace[1820907400] 'process raft request'  (duration: 60.351066ms)","trace[1820907400] 'compare'  (duration: 64.417937ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  17:49:46 up 32 min,  0 users,  load average: 0.26, 0.88, 0.90
	Linux multinode-585561 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [8db5094d208b] <==
	* I0124 17:45:55.258392       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0124 17:45:55.258402       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0124 17:45:55.258922       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0124 17:45:55.259321       1 shared_informer.go:280] Caches are synced for configmaps
	I0124 17:45:55.259622       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0124 17:45:55.259640       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0124 17:45:55.261852       1 controller.go:615] quota admission added evaluator for: namespaces
	I0124 17:45:55.278929       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0124 17:45:55.280628       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0124 17:45:55.945704       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0124 17:45:56.162820       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0124 17:45:56.166758       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0124 17:45:56.166774       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0124 17:45:56.558863       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0124 17:45:56.593191       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0124 17:45:56.694499       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0124 17:45:56.702819       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0124 17:45:56.703724       1 controller.go:615] quota admission added evaluator for: endpoints
	I0124 17:45:56.707696       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0124 17:45:57.189040       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0124 17:45:57.989573       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0124 17:45:58.001116       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0124 17:45:58.008285       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0124 17:46:10.496719       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0124 17:46:10.899399       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [8d7a8a4801df] <==
	* I0124 17:46:11.056003       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-lfdwf"
	I0124 17:46:11.276011       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0124 17:46:11.287227       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-5748b"
	W0124 17:46:33.909489       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-585561-m02" does not exist
	I0124 17:46:33.916264       1 range_allocator.go:372] Set node multinode-585561-m02 PodCIDR to [10.244.1.0/24]
	I0124 17:46:33.919965       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j5zlg"
	I0124 17:46:33.922734       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-txqvw"
	W0124 17:46:34.524624       1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
	W0124 17:46:35.246003       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-585561-m02. Assuming now as a timestamp.
	I0124 17:46:35.246031       1 event.go:294] "Event occurred" object="multinode-585561-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-585561-m02 event: Registered Node multinode-585561-m02 in Controller"
	I0124 17:46:38.426356       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0124 17:46:38.434633       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-c86kc"
	I0124 17:46:38.440168       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-7rp7j"
	W0124 17:46:59.700288       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-585561-m03" does not exist
	W0124 17:46:59.700335       1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
	I0124 17:46:59.710975       1 range_allocator.go:372] Set node multinode-585561-m03 PodCIDR to [10.244.2.0/24]
	I0124 17:46:59.711802       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hscwc"
	I0124 17:46:59.711827       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z965l"
	W0124 17:47:00.250029       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-585561-m03. Assuming now as a timestamp.
	I0124 17:47:00.250048       1 event.go:294] "Event occurred" object="multinode-585561-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-585561-m03 event: Registered Node multinode-585561-m03 in Controller"
	W0124 17:47:00.315034       1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
	W0124 17:47:26.171868       1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
	W0124 17:47:26.217085       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-585561-m03" does not exist
	W0124 17:47:26.217158       1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
	I0124 17:47:26.224446       1 range_allocator.go:372] Set node multinode-585561-m03 PodCIDR to [10.244.3.0/24]
	
	* 
	* ==> kube-proxy [7e5eddf7c5d5] <==
	* I0124 17:46:11.137588       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0124 17:46:11.137784       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0124 17:46:11.138018       1 server_others.go:535] "Using iptables proxy"
	I0124 17:46:11.161726       1 server_others.go:176] "Using iptables Proxier"
	I0124 17:46:11.161766       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0124 17:46:11.161776       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0124 17:46:11.161801       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0124 17:46:11.161835       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0124 17:46:11.162766       1 server.go:655] "Version info" version="v1.26.1"
	I0124 17:46:11.162785       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0124 17:46:11.163311       1 config.go:317] "Starting service config controller"
	I0124 17:46:11.163332       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0124 17:46:11.163349       1 config.go:226] "Starting endpoint slice config controller"
	I0124 17:46:11.163353       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0124 17:46:11.163767       1 config.go:444] "Starting node config controller"
	I0124 17:46:11.164106       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0124 17:46:11.264221       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0124 17:46:11.264284       1 shared_informer.go:280] Caches are synced for service config
	I0124 17:46:11.264708       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8af55922f6ee] <==
	* W0124 17:45:55.253090       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0124 17:45:55.253427       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0124 17:45:55.253431       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0124 17:45:55.253446       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0124 17:45:55.253431       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0124 17:45:55.253458       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0124 17:45:55.253460       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0124 17:45:55.253185       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0124 17:45:55.253475       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0124 17:45:55.253477       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0124 17:45:55.253336       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0124 17:45:55.253508       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0124 17:45:55.253363       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0124 17:45:55.253528       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0124 17:45:55.253101       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0124 17:45:55.253547       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0124 17:45:55.253273       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0124 17:45:55.253563       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0124 17:45:56.158168       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0124 17:45:56.158207       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0124 17:45:56.406846       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0124 17:45:56.406880       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0124 17:45:56.415884       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0124 17:45:56.415914       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0124 17:45:59.352147       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-01-24 17:45:29 UTC, end at Tue 2023-01-24 17:49:47 UTC. --
	Jan 24 17:46:15 multinode-585561 kubelet[2886]: I0124 17:46:15.958660    2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d694add5bf5369c664ffa57535f4fd192341b356eb4489ace3841139b339b6f"
	Jan 24 17:46:16 multinode-585561 kubelet[2886]: I0124 17:46:16.164014    2886 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=eec968db-c6da-4e2a-a20f-de7ed82a64cf path="/var/lib/kubelet/pods/eec968db-c6da-4e2a-a20f-de7ed82a64cf/volumes"
	Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.229855    2886 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\""
	Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.229926    2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
	Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.229950    2886 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
	Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.230012    2886 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\\\" network for pod \\\"coredns-787d4945fb-lfdwf\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-lfdwf_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-lfdwf" podUID=3ad6d110-548d-4cec-bae8-945a1e7d7853
	Jan 24 17:46:16 multinode-585561 kubelet[2886]: I0124 17:46:16.976003    2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14"
	Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263039    2886 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\""
	Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263103    2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
	Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263125    2886 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
	Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263189    2886 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\\\" network for pod \\\"coredns-787d4945fb-lfdwf\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-lfdwf_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-lfdwf" podUID=3ad6d110-548d-4cec-bae8-945a1e7d7853
	Jan 24 17:46:17 multinode-585561 kubelet[2886]: I0124 17:46:17.991309    2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904"
	Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284372    2886 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\""
	Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284448    2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
	Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284473    2886 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
	Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284608    2886 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\\\" network for pod \\\"coredns-787d4945fb-lfdwf\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-lfdwf_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-lfdwf" podUID=3ad6d110-548d-4cec-bae8-945a1e7d7853
	Jan 24 17:46:18 multinode-585561 kubelet[2886]: I0124 17:46:18.650642    2886 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 24 17:46:18 multinode-585561 kubelet[2886]: I0124 17:46:18.651245    2886 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 24 17:46:19 multinode-585561 kubelet[2886]: I0124 17:46:19.005307    2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c"
	Jan 24 17:46:20 multinode-585561 kubelet[2886]: I0124 17:46:20.037141    2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4zggw" podStartSLOduration=-9.223372026817673e+09 pod.CreationTimestamp="2023-01-24 17:46:10 +0000 UTC" firstStartedPulling="2023-01-24 17:46:11.745469717 +0000 UTC m=+13.777976029" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 17:46:15.975027099 +0000 UTC m=+18.007533417" watchObservedRunningTime="2023-01-24 17:46:20.037103988 +0000 UTC m=+22.069610305"
	Jan 24 17:46:20 multinode-585561 kubelet[2886]: I0124 17:46:20.037338    2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-lfdwf" podStartSLOduration=9.037301283 pod.CreationTimestamp="2023-01-24 17:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 17:46:20.036544143 +0000 UTC m=+22.069050461" watchObservedRunningTime="2023-01-24 17:46:20.037301283 +0000 UTC m=+22.069807600"
	Jan 24 17:46:38 multinode-585561 kubelet[2886]: I0124 17:46:38.447996    2886 topology_manager.go:210] "Topology Admit Handler"
	Jan 24 17:46:38 multinode-585561 kubelet[2886]: I0124 17:46:38.531385    2886 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tddk\" (UniqueName: \"kubernetes.io/projected/26cc4840-317c-472e-99bb-629a04d62105-kube-api-access-6tddk\") pod \"busybox-6b86dd6d48-7rp7j\" (UID: \"26cc4840-317c-472e-99bb-629a04d62105\") " pod="default/busybox-6b86dd6d48-7rp7j"
	Jan 24 17:46:39 multinode-585561 kubelet[2886]: I0124 17:46:39.192554    2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124081a92701436a43375e12a1c1bb790b0520e24efaf1d970e27208432e8dae"
	Jan 24 17:46:41 multinode-585561 kubelet[2886]: I0124 17:46:41.225686    2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-7rp7j" podStartSLOduration=-9.223372033629145e+09 pod.CreationTimestamp="2023-01-24 17:46:38 +0000 UTC" firstStartedPulling="2023-01-24 17:46:39.21404037 +0000 UTC m=+41.246546687" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 17:46:41.225554296 +0000 UTC m=+43.258060614" watchObservedRunningTime="2023-01-24 17:46:41.225631061 +0000 UTC m=+43.258137379"
	
	* 
	* ==> storage-provisioner [1f47880f5c35] <==
	* I0124 17:46:13.225870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0124 17:46:13.232899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0124 17:46:13.232952       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0124 17:46:13.240949       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0124 17:46:13.241098       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-585561_c64320d4-82d1-43e3-ada2-aae090f594fc!
	I0124 17:46:13.241061       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0ca7298-edcb-4d68-ade8-a30e9888ab9a", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-585561_c64320d4-82d1-43e3-ada2-aae090f594fc became leader
	I0124 17:46:13.341876       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-585561_c64320d4-82d1-43e3-ada2-aae090f594fc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-585561 -n multinode-585561
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-585561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (149.19s)
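One way to dig into this failure: the kube-controller-manager log above assigns multinode-585561-m03 PodCIDR 10.244.2.0/24 on its first registration and 10.244.3.0/24 after the restart, while the kubelet log shows repeated "unsupported CNI result version \"1.0.0\"" sandbox failures. Listing each node's assigned PodCIDR on a live reproduction would confirm the CIDR churn; a minimal sketch, assuming kubectl and the kubeconfig context created by this profile, using only standard kubectl flags:

	kubectl --context multinode-585561 get nodes \
	  -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

A worker whose PodCIDR changes across a stop/start while the CNI config on the node still references the old range would be consistent with the "node start m03" timeout recorded here.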

                                                
                                    

Test pass (288/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 26.88
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.26.1/json-events 15.03
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.27
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
18 TestDownloadOnlyKic 5.04
19 TestBinaryMirror 0.87
20 TestOffline 78.77
22 TestAddons/Setup 120.47
24 TestAddons/parallel/Registry 19.75
25 TestAddons/parallel/Ingress 21.1
26 TestAddons/parallel/MetricsServer 5.69
27 TestAddons/parallel/HelmTiller 12.84
29 TestAddons/parallel/CSI 47.29
30 TestAddons/parallel/Headlamp 10.05
31 TestAddons/parallel/CloudSpanner 5.33
34 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/StoppedEnableDisable 11.1
36 TestCertOptions 44.36
37 TestCertExpiration 262.03
38 TestDockerFlags 46.8
39 TestForceSystemdFlag 62.54
40 TestForceSystemdEnv 43.92
41 TestKVMDriverInstallOrUpdate 7.3
45 TestErrorSpam/setup 37.53
46 TestErrorSpam/start 0.96
47 TestErrorSpam/status 1.11
48 TestErrorSpam/pause 1.41
49 TestErrorSpam/unpause 1.45
50 TestErrorSpam/stop 2.17
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 54.03
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 43.28
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
62 TestFunctional/serial/CacheCmd/cache/add_local 1.47
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
67 TestFunctional/serial/CacheCmd/cache/delete 0.15
68 TestFunctional/serial/MinikubeKubectlCmd 0.13
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
70 TestFunctional/serial/ExtraConfig 43.04
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.17
73 TestFunctional/serial/LogsFileCmd 1.2
75 TestFunctional/parallel/ConfigCmd 0.49
76 TestFunctional/parallel/DashboardCmd 13.42
77 TestFunctional/parallel/DryRun 0.76
78 TestFunctional/parallel/InternationalLanguage 0.23
79 TestFunctional/parallel/StatusCmd 1.21
82 TestFunctional/parallel/ServiceCmd 12.33
83 TestFunctional/parallel/ServiceCmdConnect 8.89
84 TestFunctional/parallel/AddonsCmd 0.2
85 TestFunctional/parallel/PersistentVolumeClaim 33.72
87 TestFunctional/parallel/SSHCmd 0.94
88 TestFunctional/parallel/CpCmd 1.58
89 TestFunctional/parallel/MySQL 31.51
90 TestFunctional/parallel/FileSync 0.42
91 TestFunctional/parallel/CertSync 2.33
95 TestFunctional/parallel/NodeLabels 0.1
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
99 TestFunctional/parallel/License 0.24
100 TestFunctional/parallel/Version/short 0.07
101 TestFunctional/parallel/Version/components 0.8
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
106 TestFunctional/parallel/ImageCommands/ImageBuild 2.75
107 TestFunctional/parallel/ImageCommands/Setup 1.83
108 TestFunctional/parallel/DockerEnv/bash 1.49
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.11
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.44
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.85
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
116 TestFunctional/parallel/ProfileCmd/profile_list 0.5
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
118 TestFunctional/parallel/MountCmd/any-port 15.4
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.14
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.3
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.05
123 TestFunctional/parallel/MountCmd/specific-port 2.79
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.5
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_addon-resizer_images 0.08
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
140 TestImageBuild/serial/NormalBuild 2.33
141 TestImageBuild/serial/BuildWithBuildArg 1.02
142 TestImageBuild/serial/BuildWithDockerIgnore 0.41
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.33
146 TestIngressAddonLegacy/StartLegacyK8sCluster 65.02
148 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.14
149 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.39
150 TestIngressAddonLegacy/serial/ValidateIngressAddons 49.57
153 TestJSONOutput/start/Command 55.89
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.62
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.52
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 5.87
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.28
178 TestKicCustomNetwork/create_custom_network 39.53
179 TestKicCustomNetwork/use_default_bridge_network 42.64
180 TestKicExistingNetwork 39.65
181 TestKicCustomSubnet 39.29
182 TestKicStaticIP 42.11
183 TestMainNoArgs 0.07
184 TestMinikubeProfile 79.17
187 TestMountStart/serial/StartWithMountFirst 6.7
188 TestMountStart/serial/VerifyMountFirst 0.34
189 TestMountStart/serial/StartWithMountSecond 6.18
190 TestMountStart/serial/VerifyMountSecond 0.33
191 TestMountStart/serial/DeleteFirst 1.58
192 TestMountStart/serial/VerifyMountPostDelete 0.33
193 TestMountStart/serial/Stop 1.24
194 TestMountStart/serial/RestartStopped 7.42
195 TestMountStart/serial/VerifyMountPostStop 0.33
198 TestMultiNode/serial/FreshStart2Nodes 75.46
199 TestMultiNode/serial/DeployApp2Nodes 8.1
200 TestMultiNode/serial/PingHostFrom2Pods 0.94
201 TestMultiNode/serial/AddNode 16.64
202 TestMultiNode/serial/ProfileList 0.4
203 TestMultiNode/serial/CopyFile 11.94
204 TestMultiNode/serial/StopNode 2.41
206 TestMultiNode/serial/RestartKeepsNodes 93.17
207 TestMultiNode/serial/DeleteNode 5.01
208 TestMultiNode/serial/StopMultiNode 21.66
209 TestMultiNode/serial/RestartMultiNode 58.59
210 TestMultiNode/serial/ValidateNameConflict 39.42
215 TestPreload 126.83
217 TestScheduledStopUnix 111.51
218 TestSkaffold 71.97
220 TestInsufficientStorage 11.82
221 TestRunningBinaryUpgrade 100.28
223 TestKubernetesUpgrade 415.62
224 TestMissingContainerUpgrade 112.91
226 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
227 TestNoKubernetes/serial/StartWithK8s 61.3
228 TestNoKubernetes/serial/StartWithStopK8s 17.18
229 TestNoKubernetes/serial/Start 10.61
230 TestNoKubernetes/serial/VerifyK8sNotRunning 0.49
231 TestNoKubernetes/serial/ProfileList 1.49
232 TestNoKubernetes/serial/Stop 1.29
233 TestNoKubernetes/serial/StartNoArgs 9.74
234 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
235 TestStoppedBinaryUpgrade/Setup 1.93
236 TestStoppedBinaryUpgrade/Upgrade 107.73
256 TestPause/serial/Start 57.4
257 TestStoppedBinaryUpgrade/MinikubeLogs 2.9
258 TestPause/serial/SecondStartNoReconfiguration 46.3
259 TestNetworkPlugins/group/auto/Start 105.34
260 TestPause/serial/Pause 0.7
261 TestPause/serial/VerifyStatus 0.45
262 TestPause/serial/Unpause 0.67
263 TestPause/serial/PauseAgain 0.88
264 TestPause/serial/DeletePaused 2.36
265 TestPause/serial/VerifyDeletedResources 0.8
266 TestNetworkPlugins/group/kindnet/Start 65.83
267 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
268 TestNetworkPlugins/group/calico/Start 83.73
269 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
270 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
271 TestNetworkPlugins/group/kindnet/DNS 0.16
272 TestNetworkPlugins/group/kindnet/Localhost 0.15
273 TestNetworkPlugins/group/kindnet/HairPin 0.18
274 TestNetworkPlugins/group/auto/KubeletFlags 0.39
275 TestNetworkPlugins/group/auto/NetCatPod 11.28
276 TestNetworkPlugins/group/auto/DNS 0.17
277 TestNetworkPlugins/group/auto/Localhost 0.15
278 TestNetworkPlugins/group/auto/HairPin 0.18
279 TestNetworkPlugins/group/custom-flannel/Start 74.28
280 TestNetworkPlugins/group/false/Start 55.98
281 TestNetworkPlugins/group/calico/ControllerPod 5.02
282 TestNetworkPlugins/group/calico/KubeletFlags 0.38
283 TestNetworkPlugins/group/calico/NetCatPod 11.33
284 TestNetworkPlugins/group/calico/DNS 0.21
285 TestNetworkPlugins/group/calico/Localhost 0.14
286 TestNetworkPlugins/group/calico/HairPin 0.16
287 TestNetworkPlugins/group/bridge/Start 67.51
288 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.51
289 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.31
290 TestNetworkPlugins/group/false/KubeletFlags 0.41
291 TestNetworkPlugins/group/false/NetCatPod 15.27
292 TestNetworkPlugins/group/custom-flannel/DNS 0.17
293 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
294 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
295 TestNetworkPlugins/group/kubenet/Start 95.49
296 TestNetworkPlugins/group/false/DNS 0.19
297 TestNetworkPlugins/group/false/Localhost 0.18
298 TestNetworkPlugins/group/false/HairPin 0.18
299 TestNetworkPlugins/group/flannel/Start 71.58
300 TestNetworkPlugins/group/enable-default-cni/Start 54.95
301 TestNetworkPlugins/group/bridge/KubeletFlags 0.49
302 TestNetworkPlugins/group/bridge/NetCatPod 17.44
303 TestNetworkPlugins/group/bridge/DNS 0.19
304 TestNetworkPlugins/group/bridge/Localhost 0.16
305 TestNetworkPlugins/group/bridge/HairPin 0.17
307 TestStartStop/group/old-k8s-version/serial/FirstStart 113.91
308 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.23
310 TestNetworkPlugins/group/flannel/ControllerPod 5.02
311 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
312 TestNetworkPlugins/group/kubenet/NetCatPod 10.27
313 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
314 TestNetworkPlugins/group/flannel/NetCatPod 9.22
315 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
316 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
318 TestNetworkPlugins/group/kubenet/DNS 0.16
319 TestNetworkPlugins/group/kubenet/Localhost 0.18
320 TestNetworkPlugins/group/kubenet/HairPin 0.15
321 TestNetworkPlugins/group/flannel/DNS 0.19
322 TestNetworkPlugins/group/flannel/Localhost 0.18
323 TestNetworkPlugins/group/flannel/HairPin 0.16
325 TestStartStop/group/no-preload/serial/FirstStart 58.74
327 TestStartStop/group/embed-certs/serial/FirstStart 58.8
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.28
330 TestStartStop/group/no-preload/serial/DeployApp 8.39
331 TestStartStop/group/embed-certs/serial/DeployApp 9.31
332 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.72
333 TestStartStop/group/no-preload/serial/Stop 11.06
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
335 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.7
336 TestStartStop/group/embed-certs/serial/Stop 11.09
337 TestStartStop/group/old-k8s-version/serial/DeployApp 8.37
338 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
339 TestStartStop/group/no-preload/serial/SecondStart 561.08
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.75
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.83
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.72
343 TestStartStop/group/old-k8s-version/serial/Stop 10.92
344 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
345 TestStartStop/group/embed-certs/serial/SecondStart 559.3
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
347 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 562.68
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
349 TestStartStop/group/old-k8s-version/serial/SecondStart 339.07
350 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
351 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
352 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.4
353 TestStartStop/group/old-k8s-version/serial/Pause 3.12
355 TestStartStop/group/newest-cni/serial/FirstStart 50.98
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
358 TestStartStop/group/newest-cni/serial/Stop 10.63
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
360 TestStartStop/group/newest-cni/serial/SecondStart 27.95
361 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
364 TestStartStop/group/newest-cni/serial/Pause 3.09
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.41
370 TestStartStop/group/no-preload/serial/Pause 3.05
371 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.47
373 TestStartStop/group/embed-certs/serial/Pause 3.24
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.94
TestDownloadOnly/v1.16.0/json-events (26.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-251448 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-251448 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (26.880992129s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (26.88s)
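The -o=json flag used here makes minikube emit one CloudEvents-style JSON object per line on stdout (klog output goes to stderr via --alsologtostderr), so the event stream from a run like this can be summarized with standard tools. A minimal sketch, assuming jq is available and relying only on the top-level "type" field of the emitted JSON:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-251448 \
	  --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker \
	  | jq -r '.type' | sort | uniq -c

This counts how many events of each type (steps, download progress, and so on) the start produced.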

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-251448
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-251448: exit status 85 (88.940479ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-251448 | jenkins | v1.28.0 | 24 Jan 23 17:28 UTC |          |
	|         | -p download-only-251448        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 17:28:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 17:28:03.692916   10138 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:28:03.693009   10138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:28:03.693014   10138 out.go:309] Setting ErrFile to fd 2...
	I0124 17:28:03.693019   10138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:28:03.693123   10138 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	W0124 17:28:03.693233   10138 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3637/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3637/.minikube/config/config.json: no such file or directory
	I0124 17:28:03.693798   10138 out.go:303] Setting JSON to true
	I0124 17:28:03.694569   10138 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":628,"bootTime":1674580656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0124 17:28:03.694627   10138 start.go:135] virtualization: kvm guest
	I0124 17:28:03.697582   10138 out.go:97] [download-only-251448] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	W0124 17:28:03.697681   10138 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball: no such file or directory
	I0124 17:28:03.697740   10138 notify.go:220] Checking for updates...
	I0124 17:28:03.699424   10138 out.go:169] MINIKUBE_LOCATION=15565
	I0124 17:28:03.701117   10138 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 17:28:03.702674   10138 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:28:03.704164   10138 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	I0124 17:28:03.705608   10138 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0124 17:28:03.708213   10138 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0124 17:28:03.708365   10138 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 17:28:03.734669   10138 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0124 17:28:03.734800   10138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:28:04.610046   10138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:28:03.753178431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:28:04.610143   10138 docker.go:282] overlay module found
	I0124 17:28:04.612260   10138 out.go:97] Using the docker driver based on user configuration
	I0124 17:28:04.612287   10138 start.go:296] selected driver: docker
	I0124 17:28:04.612300   10138 start.go:840] validating driver "docker" against <nil>
	I0124 17:28:04.612383   10138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:28:04.712158   10138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:28:04.630400091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:28:04.712286   10138 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0124 17:28:04.712840   10138 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0124 17:28:04.712965   10138 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0124 17:28:04.715151   10138 out.go:169] Using Docker driver with root privileges
	I0124 17:28:04.717702   10138 cni.go:84] Creating CNI manager for ""
	I0124 17:28:04.717730   10138 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 17:28:04.717740   10138 start_flags.go:319] config:
	{Name:download-only-251448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-251448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:28:04.719816   10138 out.go:97] Starting control plane node download-only-251448 in cluster download-only-251448
	I0124 17:28:04.719840   10138 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 17:28:04.721350   10138 out.go:97] Pulling base image ...
	I0124 17:28:04.721373   10138 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 17:28:04.721428   10138 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 17:28:04.742787   10138 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0124 17:28:04.742933   10138 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local cache directory
	I0124 17:28:04.743019   10138 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0124 17:28:04.820091   10138 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0124 17:28:04.820120   10138 cache.go:57] Caching tarball of preloaded images
	I0124 17:28:04.820272   10138 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 17:28:04.822910   10138 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0124 17:28:04.822934   10138 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0124 17:28:04.926354   10138 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0124 17:28:18.526462   10138 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0124 17:28:18.526554   10138 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0124 17:28:19.248058   10138 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0124 17:28:19.248359   10138 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/download-only-251448/config.json ...
	I0124 17:28:19.248384   10138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/download-only-251448/config.json: {Name:mk8476d53ce6478d3cdf4bc1e4c524db6327a7cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 17:28:19.248559   10138 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 17:28:19.248721   10138 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15565-3637/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-251448"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
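Note that the download URL in the log above embeds the expected digest as a query parameter (checksum=md5:326f3ce331abb64565b50b8c9e791244), which minikube verifies after fetching the preload. The same check can be repeated by hand against the cached tarball; a minimal sketch using the digest and cache path recorded in this run:

	echo "326f3ce331abb64565b50b8c9e791244  /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4" \
	  | md5sum -c -

A mismatch would indicate a corrupt cache entry that minikube would need to re-download.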

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (15.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-251448 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-251448 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (15.033648593s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (15.03s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-251448
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-251448: exit status 85 (86.7796ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-251448 | jenkins | v1.28.0 | 24 Jan 23 17:28 UTC |          |
	|         | -p download-only-251448        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-251448 | jenkins | v1.28.0 | 24 Jan 23 17:28 UTC |          |
	|         | -p download-only-251448        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 17:28:30
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 17:28:30.665166   10301 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:28:30.665331   10301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:28:30.665339   10301 out.go:309] Setting ErrFile to fd 2...
	I0124 17:28:30.665345   10301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:28:30.665455   10301 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	W0124 17:28:30.665561   10301 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3637/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3637/.minikube/config/config.json: no such file or directory
	I0124 17:28:30.665975   10301 out.go:303] Setting JSON to true
	I0124 17:28:30.666726   10301 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":655,"bootTime":1674580656,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0124 17:28:30.666791   10301 start.go:135] virtualization: kvm guest
	I0124 17:28:30.669323   10301 out.go:97] [download-only-251448] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0124 17:28:30.669408   10301 notify.go:220] Checking for updates...
	I0124 17:28:30.671036   10301 out.go:169] MINIKUBE_LOCATION=15565
	I0124 17:28:30.672951   10301 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 17:28:30.674456   10301 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:28:30.675873   10301 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	I0124 17:28:30.677321   10301 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0124 17:28:30.680218   10301 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0124 17:28:30.680648   10301 config.go:180] Loaded profile config "download-only-251448": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0124 17:28:30.680698   10301 start.go:748] api.Load failed for download-only-251448: filestore "download-only-251448": Docker machine "download-only-251448" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0124 17:28:30.680760   10301 driver.go:365] Setting default libvirt URI to qemu:///system
	W0124 17:28:30.680790   10301 start.go:748] api.Load failed for download-only-251448: filestore "download-only-251448": Docker machine "download-only-251448" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0124 17:28:30.705889   10301 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0124 17:28:30.705952   10301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:28:30.804937   10301 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:28:30.723934711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:28:30.805029   10301 docker.go:282] overlay module found
	I0124 17:28:30.807454   10301 out.go:97] Using the docker driver based on existing profile
	I0124 17:28:30.807483   10301 start.go:296] selected driver: docker
	I0124 17:28:30.807499   10301 start.go:840] validating driver "docker" against &{Name:download-only-251448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-251448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:28:30.807625   10301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:28:30.904199   10301 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:28:30.825498656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:28:30.904787   10301 cni.go:84] Creating CNI manager for ""
	I0124 17:28:30.904804   10301 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 17:28:30.904813   10301 start_flags.go:319] config:
	{Name:download-only-251448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-251448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:28:30.907192   10301 out.go:97] Starting control plane node download-only-251448 in cluster download-only-251448
	I0124 17:28:30.907221   10301 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 17:28:30.908977   10301 out.go:97] Pulling base image ...
	I0124 17:28:30.909004   10301 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:28:30.909128   10301 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 17:28:30.929280   10301 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0124 17:28:30.929409   10301 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local cache directory
	I0124 17:28:30.929426   10301 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local cache directory, skipping pull
	I0124 17:28:30.929431   10301 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in cache, skipping pull
	I0124 17:28:30.929440   10301 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a as a tarball
	I0124 17:28:31.197414   10301 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 17:28:31.197437   10301 cache.go:57] Caching tarball of preloaded images
	I0124 17:28:31.197590   10301 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 17:28:31.200244   10301 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0124 17:28:31.200266   10301 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0124 17:28:31.303910   10301 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:44c239b3385ae5d04aaa293b94f853d9 -> /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-251448"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.09s)
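
For a download-only profile no node ever exists, so `minikube logs` is expected to fail; the test asserts the specific exit status (85) rather than mere failure. A hedged sketch of that assertion with os/exec:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-251448")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok {
			// The report above expects exit status 85 here.
			fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
			return
		}
		fmt.Println("command unexpectedly succeeded or failed to start:", err)
	}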

TestDownloadOnly/DeleteAll (0.27s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.27s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-251448
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestDownloadOnlyKic (5.04s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-432472 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-432472 --force --alsologtostderr --driver=docker  --container-runtime=docker: (3.931630924s)
helpers_test.go:175: Cleaning up "download-docker-432472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-432472
--- PASS: TestDownloadOnlyKic (5.04s)

TestBinaryMirror (0.87s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-109243 --alsologtostderr --binary-mirror http://127.0.0.1:37443 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-109243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-109243
--- PASS: TestBinaryMirror (0.87s)
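
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:37443 in this run) so Kubernetes binaries are fetched from it instead of the upstream release bucket. A minimal sketch of such a mirror, assuming it only needs to serve a directory laid out like the release bucket; "./mirror" is a placeholder path, not part of the test:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// "./mirror" would contain e.g. release/v1.26.1/bin/linux/amd64/kubectl
		// plus its checksum files, mirroring storage.googleapis.com/kubernetes-release.
		log.Fatal(http.ListenAndServe("127.0.0.1:37443", http.FileServer(http.Dir("./mirror"))))
	}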

TestOffline (78.77s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-259100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-259100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m16.37171741s)
helpers_test.go:175: Cleaning up "offline-docker-259100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-259100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-259100: (2.39907705s)
--- PASS: TestOffline (78.77s)

TestAddons/Setup (120.47s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-573842 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-573842 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m0.471637492s)
--- PASS: TestAddons/Setup (120.47s)

TestAddons/parallel/Registry (19.75s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 9.864369ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wcb4s" [a249582e-d362-4444-bc3c-aa6dabe1f5df] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008211487s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qr4n8" [d293acfb-8fc3-4bcb-8e7d-d98ebefa0e51] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008183256s
addons_test.go:305: (dbg) Run:  kubectl --context addons-573842 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-573842 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-573842 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.643671013s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 ip
2023/01/24 17:31:11 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.75s)
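
After the in-cluster wget probe, the test fetches the registry through the host-facing endpoint logged above (GET http://192.168.49.2:5000). A sketch of that final reachability check; the IP and port are taken from this run's log and will differ elsewhere:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://192.168.49.2:5000")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}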

TestAddons/parallel/Ingress (21.1s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-573842 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-573842 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-573842 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9208f33f-7c2f-462b-9714-9ccd0359037f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9208f33f-7c2f-462b-9714-9ccd0359037f] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009686023s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-573842 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-573842 addons disable ingress-dns --alsologtostderr -v=1: (1.433621719s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-573842 addons disable ingress --alsologtostderr -v=1: (7.581989208s)
--- PASS: TestAddons/parallel/Ingress (21.10s)
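
The core assertion here is the ssh'd curl with an explicit Host header, which exercises the nginx Ingress rule by name. A sketch of the same request in Go; note it must run where 127.0.0.1 is the cluster node, which is why the test goes through minikube ssh:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Println(err)
			return
		}
		req.Host = "nginx.example.com" // routes the request via the Ingress host rule
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}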

TestAddons/parallel/MetricsServer (5.69s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.144103ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-kfqsn" [ace8ffcc-1564-437f-b57f-59146a3960d9] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009403531s
addons_test.go:380: (dbg) Run:  kubectl --context addons-573842 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.69s)

TestAddons/parallel/HelmTiller (12.84s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 1.972503ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-pzm4z" [0f6444a0-8db1-420b-8d63-f4d25492bc19] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008319961s
addons_test.go:438: (dbg) Run:  kubectl --context addons-573842 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-573842 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.411242971s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.84s)

TestAddons/parallel/CSI (47.29s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 11.260582ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-573842 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573842 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-573842 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1d405c91-54cd-4578-a2fc-858692cd6d65] Pending
helpers_test.go:344: "task-pv-pod" [1d405c91-54cd-4578-a2fc-858692cd6d65] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1d405c91-54cd-4578-a2fc-858692cd6d65] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.006702151s
addons_test.go:549: (dbg) Run:  kubectl --context addons-573842 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-573842 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-573842 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-573842 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-573842 delete pod task-pv-pod: (1.315167351s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-573842 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-573842 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-573842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-573842 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d5d818e6-1dfd-4ed9-ada9-5db68255df5f] Pending
helpers_test.go:344: "task-pv-pod-restore" [d5d818e6-1dfd-4ed9-ada9-5db68255df5f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d5d818e6-1dfd-4ed9-ada9-5db68255df5f] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.006876035s
addons_test.go:591: (dbg) Run:  kubectl --context addons-573842 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-573842 delete pod task-pv-pod-restore: (1.087837265s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-573842 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-573842 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-573842 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.990171388s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-573842 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.29s)
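
Several steps above are plain polls of object status via kubectl's jsonpath output (the helpers_test.go:394 lines). A hedged sketch of that pattern for the hpvc claim, shelling out to kubectl exactly as the log shows; the retry count and interval are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 60; i++ {
			out, _ := exec.Command("kubectl", "--context", "addons-573842",
				"get", "pvc", "hpvc", "-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc")
	}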

TestAddons/parallel/Headlamp (10.05s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-573842 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-573842 --alsologtostderr -v=1: (1.042261393s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-6v5vp" [e000b9df-e129-4a85-85ef-f9b48ee51a3e] Pending
helpers_test.go:344: "headlamp-5759877c79-6v5vp" [e000b9df-e129-4a85-85ef-f9b48ee51a3e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-6v5vp" [e000b9df-e129-4a85-85ef-f9b48ee51a3e] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.006431914s
--- PASS: TestAddons/parallel/Headlamp (10.05s)

TestAddons/parallel/CloudSpanner (5.33s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5dcf58dbbb-jc5m7" [98bc0b8a-c8ed-4d36-8655-15e311c74c61] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00602252s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-573842
--- PASS: TestAddons/parallel/CloudSpanner (5.33s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-573842 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-573842 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (11.1s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-573842
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-573842: (10.892493551s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-573842
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-573842
--- PASS: TestAddons/StoppedEnableDisable (11.10s)

TestCertOptions (44.36s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-533928 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-533928 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (41.154340421s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-533928 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-533928 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-533928 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-533928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-533928
E0124 18:03:16.866465   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-533928: (2.406658248s)
--- PASS: TestCertOptions (44.36s)
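
The openssl step above verifies that the extra --apiserver-ips/--apiserver-names values ended up as SANs in apiserver.crt. The same inspection can be done with Go's standard x509 package (the file path is an assumption: copy the cert out of the node first); it also prints NotAfter, the field the next test, TestCertExpiration, is concerned with:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs/
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
		fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
		fmt.Println("not after:", cert.NotAfter)
	}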

TestCertExpiration (262.03s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-248093 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-248093 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (49.278210348s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-248093 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0124 18:04:51.565113   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-248093 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (30.443213053s)
helpers_test.go:175: Cleaning up "cert-expiration-248093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-248093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-248093: (2.31106937s)
--- PASS: TestCertExpiration (262.03s)

TestDockerFlags (46.8s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-481056 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-481056 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.53407564s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-481056 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-481056 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-481056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-481056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-481056: (2.461729443s)
--- PASS: TestDockerFlags (46.80s)
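
The assertions read dockerd's systemd unit state through the node's shell, checking that --docker-env values land in Environment and --docker-opt values in ExecStart. A sketch of that check, driving minikube ssh the same way the log does; the substring test at the end is an illustrative stand-in for the test's real comparison:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-481056",
			"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			fmt.Println(err)
			return
		}
		env := string(out)
		fmt.Print(env)
		fmt.Println("FOO=BAR applied:", strings.Contains(env, "FOO=BAR"))
	}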

TestForceSystemdFlag (62.54s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-287665 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-287665 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (59.402641635s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-287665 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-287665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-287665
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-287665: (2.508953592s)
--- PASS: TestForceSystemdFlag (62.54s)

TestForceSystemdEnv (43.92s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-161225 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-161225 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.23759381s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-161225 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-161225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-161225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-161225: (2.262248463s)
--- PASS: TestForceSystemdEnv (43.92s)

TestKVMDriverInstallOrUpdate (7.3s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
E0124 18:00:52.714666   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (7.30s)

TestErrorSpam/setup (37.53s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-550467 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-550467 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-550467 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-550467 --driver=docker  --container-runtime=docker: (37.525094244s)
--- PASS: TestErrorSpam/setup (37.53s)

TestErrorSpam/start (0.96s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 start --dry-run
--- PASS: TestErrorSpam/start (0.96s)

TestErrorSpam/status (1.11s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.41s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.45s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

TestErrorSpam/stop (2.17s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 stop: (1.909353236s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550467 --log_dir /tmp/nospam-550467 stop
--- PASS: TestErrorSpam/stop (2.17s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/test/nested/copy/10126/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (54.03s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470074 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-470074 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (54.028982092s)
--- PASS: TestFunctional/serial/StartWithProxy (54.03s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.28s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470074 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-470074 --alsologtostderr -v=8: (43.280146732s)
functional_test.go:656: soft start took 43.280785731s for "functional-470074" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.28s)
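Note: a "soft" start re-runs start against a profile that is already up, and minikube should reuse the running cluster rather than recreate it. A minimal sketch of timing that second start, assuming the binary and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Profile functional-470074 is already running; this start should
	// only reconcile state, which is why the test reports a duration.
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-470074", "--alsologtostderr", "-v=8").CombinedOutput()
	if err != nil {
		fmt.Printf("soft start failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("soft start took %s\n", time.Since(start))
}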

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-470074 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 cache add k8s.gcr.io/pause:3.1: (1.299875238s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 cache add k8s.gcr.io/pause:3.3: (1.316871599s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-470074 /tmp/TestFunctionalserialCacheCmdcacheadd_local2971940356/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cache add minikube-local-cache-test:functional-470074
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 cache add minikube-local-cache-test:functional-470074: (1.206437929s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cache delete minikube-local-cache-test:functional-470074
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-470074
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (359.555295ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cache reload
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)
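Note: the sequence above is the whole cache-reload round trip: delete the image inside the node, confirm inspecti fails, run cache reload, confirm the image is back. A minimal Go sketch of the same flow, assuming this run's binary and profile:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output and error.
func run(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	return string(out), err
}

func main() {
	mk := "out/minikube-linux-amd64"
	p := "functional-470074"
	// 1. Delete the image inside the node, out from under the cache.
	run(mk, "-p", p, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest")
	// 2. inspecti should now fail with "no such image".
	if _, err := run(mk, "-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
		return
	}
	// 3. cache reload pushes cached images back into the node.
	if out, err := run(mk, "-p", p, "cache", "reload"); err != nil {
		fmt.Printf("cache reload failed: %v\n%s", err, out)
		return
	}
	// 4. The image should be present again.
	if out, err := run(mk, "-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
		fmt.Printf("image still missing after reload: %v\n%s", err, out)
	}
}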

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 kubectl -- --context functional-470074 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-470074 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.04s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470074 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-470074 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.038789686s)
functional_test.go:754: restart took 43.03894214s for "functional-470074" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.04s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-470074 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.17s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 logs: (1.170679451s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.2s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 logs --file /tmp/TestFunctionalserialLogsFileCmd2888388451/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 logs --file /tmp/TestFunctionalserialLogsFileCmd2888388451/001/logs.txt: (1.199775433s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 config get cpus: exit status 14 (79.799189ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 config get cpus: exit status 14 (84.173253ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
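Note: both Non-zero exits above show that config get on an unset key exits with status 14 rather than 0. A minimal sketch of asserting that exit code from Go, using this run's binary and profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// config get on an unset key exits 14, as seen twice in the log.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-470074",
		"config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Printf("key unset, as expected: %s", out)
		return
	}
	fmt.Printf("cpus is set: %s", out)
}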

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.42s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-470074 --alsologtostderr -v=1]
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-470074 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 60943: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.42s)

                                                
                                    
TestFunctional/parallel/DryRun (0.76s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470074 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-470074 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (305.980692ms)
-- stdout --
	* [functional-470074] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0124 17:35:53.111498   60079 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:35:53.111689   60079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:35:53.111698   60079 out.go:309] Setting ErrFile to fd 2...
	I0124 17:35:53.111703   60079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:35:53.111851   60079 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	I0124 17:35:53.112467   60079 out.go:303] Setting JSON to false
	I0124 17:35:53.113870   60079 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1097,"bootTime":1674580656,"procs":591,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0124 17:35:53.113937   60079 start.go:135] virtualization: kvm guest
	I0124 17:35:53.116178   60079 out.go:177] * [functional-470074] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0124 17:35:53.117547   60079 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 17:35:53.117479   60079 notify.go:220] Checking for updates...
	I0124 17:35:53.120736   60079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 17:35:53.122408   60079 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:35:53.124551   60079 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	I0124 17:35:53.126079   60079 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0124 17:35:53.127594   60079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 17:35:53.129424   60079 config.go:180] Loaded profile config "functional-470074": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:35:53.129957   60079 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 17:35:53.162912   60079 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0124 17:35:53.163094   60079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:35:53.267797   60079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-24 17:35:53.184399311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:35:53.267893   60079 docker.go:282] overlay module found
	I0124 17:35:53.289367   60079 out.go:177] * Using the docker driver based on existing profile
	I0124 17:35:53.294813   60079 start.go:296] selected driver: docker
	I0124 17:35:53.294846   60079 start.go:840] validating driver "docker" against &{Name:functional-470074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-470074 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:f
alse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:35:53.294956   60079 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 17:35:53.307099   60079 out.go:177] 
	W0124 17:35:53.316778   60079 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0124 17:35:53.331081   60079 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470074 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0124 17:35:53.353130   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DryRun (0.76s)
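Note: the stderr trace above shows validation running before any resources are touched: a 250MB request is below minikube's usable minimum of 1800MB, so even --dry-run exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch of asserting that, with this run's binary and profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// 250MB is under the 1800MB floor, so the dry-run start should
	// fail validation with exit status 23, matching the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-470074", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("memory validation rejected the request, as expected")
		return
	}
	fmt.Printf("unexpected result: %v\n%s", err, out)
}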

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470074 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
E0124 17:35:53.993694   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-470074 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (232.188615ms)
-- stdout --
	* [functional-470074] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0124 17:35:53.869560   60383 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:35:53.869658   60383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:35:53.869666   60383 out.go:309] Setting ErrFile to fd 2...
	I0124 17:35:53.869671   60383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:35:53.869828   60383 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	I0124 17:35:53.870351   60383 out.go:303] Setting JSON to false
	I0124 17:35:53.871548   60383 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1098,"bootTime":1674580656,"procs":595,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0124 17:35:53.871614   60383 start.go:135] virtualization: kvm guest
	I0124 17:35:53.874212   60383 out.go:177] * [functional-470074] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I0124 17:35:53.875995   60383 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 17:35:53.875903   60383 notify.go:220] Checking for updates...
	I0124 17:35:53.878102   60383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 17:35:53.880006   60383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	I0124 17:35:53.881542   60383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	I0124 17:35:53.883103   60383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0124 17:35:53.884998   60383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 17:35:53.887149   60383 config.go:180] Loaded profile config "functional-470074": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:35:53.887905   60383 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 17:35:53.919036   60383 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
	I0124 17:35:53.919140   60383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:35:54.018598   60383 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-24 17:35:53.938612679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:35:54.018725   60383 docker.go:282] overlay module found
	I0124 17:35:54.021848   60383 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0124 17:35:54.023164   60383 start.go:296] selected driver: docker
	I0124 17:35:54.023189   60383 start.go:840] validating driver "docker" against &{Name:functional-470074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-470074 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:f
alse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 17:35:54.023299   60383 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 17:35:54.025539   60383 out.go:177] 
	W0124 17:35:54.026964   60383 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0124 17:35:54.028574   60383 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
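Note: the French stdout above ("Utilisation du pilote docker...", "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") is minikube's own translation catalog in action; the test deliberately reuses the failing 250MB dry-run so the localized error text is exercised. A minimal sketch of requesting that behavior directly, assuming minikube reads the locale from the LC_ALL environment variable (LANG behaves similarly):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Run the same under-provisioned dry-run with a French locale;
	// the command still exits 23, but the messages come out localized.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-470074", "--dry-run", "--memory", "250MB")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // locale selection is the assumption here
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out)
}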

                                                
                                    
TestFunctional/parallel/StatusCmd (1.21s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 status
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (12.33s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-470074 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-470074 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-jb7s4" [0a246408-8422-43a8-976f-1db8284a0330] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-jb7s4" [0a246408-8422-43a8-976f-1db8284a0330] Running
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.007192796s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.49.2:32596
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 service hello-node --url
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:32596
--- PASS: TestFunctional/parallel/ServiceCmd (12.33s)
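Note: the pattern above is deploy, expose as NodePort, then let service --url print a reachable endpoint. A minimal sketch of consuming that URL programmatically, assuming this run's binary, profile, and service name:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// service --url prints the NodePort URL for the service
	// (http://192.168.49.2:32596 in the run above).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-470074",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d, %d bytes\n", url, resp.StatusCode, len(body))
}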

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.89s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-470074 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-470074 expose deployment hello-node-connect --type=NodePort --port=8080
E0124 17:36:02.956023   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-hps7c" [9d0e7471-45d3-4461-a8a4-234008521955] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-hps7c" [9d0e7471-45d3-4461-a8a4-234008521955] Running
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006168628s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:31308
functional_test.go:1605: http://192.168.49.2:31308: success! body:

Hostname: hello-node-connect-5cf7cc858f-hps7c

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31308
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

E0124 17:36:13.196637   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.89s)
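Note: beyond reaching the endpoint, this test validates the echoserver reply itself, whose first line is "Hostname: <pod name>". A minimal sketch of that body check; the URL and pod name below are the ones from this run and will differ on another cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	url := "http://192.168.49.2:31308" // endpoint printed by service --url above
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver reports which pod answered; matching the
	// deployment prefix proves traffic reached the right backend.
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver answered from the expected deployment")
	} else {
		fmt.Printf("unexpected body:\n%s", body)
	}
}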

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.72s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b6410987-c94e-439a-98eb-8b1bd84374d7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014472674s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-470074 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-470074 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-470074 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-470074 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6fec5085-b977-4a73-9f08-249a68e70c8f] Pending
helpers_test.go:344: "sp-pod" [6fec5085-b977-4a73-9f08-249a68e70c8f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6fec5085-b977-4a73-9f08-249a68e70c8f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.029288686s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-470074 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-470074 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-470074 delete -f testdata/storage-provisioner/pod.yaml: (1.400378195s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-470074 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d7e1b89b-8f44-4545-af0c-e768749b06b4] Pending
helpers_test.go:344: "sp-pod" [d7e1b89b-8f44-4545-af0c-e768749b06b4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0124 17:35:52.714515   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
E0124 17:35:52.720215   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
E0124 17:35:52.730528   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
E0124 17:35:52.750768   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
E0124 17:35:52.791781   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
E0124 17:35:52.872056   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [d7e1b89b-8f44-4545-af0c-e768749b06b4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.01019092s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-470074 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.72s)
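Note: the core of the test above is the persistence round trip: write a file into the PVC-backed mount, delete the pod, recreate it, and confirm the file survived. A minimal sketch of that flow, using the test's own manifests and this run's context; the readiness waits the test interleaves between steps are elided here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kc runs kubectl against the functional-470074 context.
func kc(args ...string) (string, error) {
	args = append([]string{"--context", "functional-470074"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Recreate the pod; the claim (and its data) must outlive it.
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err == nil && strings.Contains(out, "foo") {
		fmt.Println("file survived pod recreation; the volume is persistent")
	} else {
		fmt.Printf("persistence check failed: %v\n%s", err, out)
	}
}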

                                                
                                    
TestFunctional/parallel/SSHCmd (0.94s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.94s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.58s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh -n functional-470074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 cp functional-470074:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd623281635/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh -n functional-470074 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)
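Note: the cp/ssh-cat pairs above copy a file into the node and back out, then compare contents. A minimal sketch of the same round trip, assuming this run's binary and profile; /tmp/cp-test-back.txt is an illustrative local destination, not the test's:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	local := "testdata/cp-test.txt" // any readable file works
	// Copy the file into the node, then back to a local path.
	exec.Command(mk, "-p", "functional-470074", "cp", local,
		"/home/docker/cp-test.txt").Run()
	exec.Command(mk, "-p", "functional-470074", "cp",
		"functional-470074:/home/docker/cp-test.txt", "/tmp/cp-test-back.txt").Run()
	// Byte-compare the original against the round-tripped copy.
	want, _ := os.ReadFile(local)
	got, _ := os.ReadFile("/tmp/cp-test-back.txt")
	if bytes.Equal(want, got) {
		fmt.Println("cp round-trip preserved the file")
	} else {
		fmt.Println("cp round-trip mismatch")
	}
}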

                                                
                                    
x
+
TestFunctional/parallel/MySQL (31.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-470074 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-mgmgx" [5f258bac-20a5-4a82-a86e-ae10e44bc2ba] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-mgmgx" [5f258bac-20a5-4a82-a86e-ae10e44bc2ba] Running
E0124 17:35:55.274380   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.010820776s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-470074 exec mysql-888f84dd9-mgmgx -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-470074 exec mysql-888f84dd9-mgmgx -- mysql -ppassword -e "show databases;": exit status 1 (300.47405ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-470074 exec mysql-888f84dd9-mgmgx -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-470074 exec mysql-888f84dd9-mgmgx -- mysql -ppassword -e "show databases;": exit status 1 (236.986905ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-470074 exec mysql-888f84dd9-mgmgx -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-470074 exec mysql-888f84dd9-mgmgx -- mysql -ppassword -e "show databases;": exit status 1 (167.982804ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-470074 exec mysql-888f84dd9-mgmgx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.51s)
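The three non-zero exits above are expected: the pod reports Running before mysqld has finished initializing, so the first attempts fail with ERROR 1045/2002 and the test simply re-runs the query until it succeeds. A minimal Go sketch of that retry pattern (a hypothetical helper, not the actual functional_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// showDatabases shells out the same way the test does, retrying until mysqld
// inside the pod is ready to authenticate the root user.
func showDatabases(kctx, pod string, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", kctx, "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			return out, nil // query succeeded; the server is up
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("mysql never became ready: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second) // ERROR 1045/2002 are transient during init
	}
}

func main() {
	out, err := showDatabases("functional-470074", "mysql-888f84dd9-mgmgx", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}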

TestFunctional/parallel/FileSync (0.42s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10126/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo cat /etc/test/nested/copy/10126/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.33s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10126.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo cat /etc/ssl/certs/10126.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10126.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo cat /usr/share/ca-certificates/10126.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/101262.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo cat /etc/ssl/certs/101262.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/101262.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo cat /usr/share/ca-certificates/101262.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.33s)

TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-470074 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
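The go-template above iterates the label map of the first node and prints each key. The same template can be exercised directly with Go's text/template package; the NodeList stand-in below is illustrative data, not output captured from this run:

package main

import (
	"os"
	"text/template"
)

func main() {
	// The exact template kubectl evaluates in the test above.
	const tmpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`

	// Stand-in for the decoded NodeList JSON (labels are illustrative).
	data := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]string{
						"kubernetes.io/hostname": "functional-470074",
						"minikube.k8s.io/name":   "functional-470074",
					},
				},
			},
		},
	}

	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}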

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 ssh "sudo systemctl is-active crio": exit status 1 (364.614525ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

TestFunctional/parallel/License (0.24s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.8s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470074 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:v3.3.8-0-gke.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.4
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-470074
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-470074
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470074 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.9.4            | a81c2ec4e946d | 49.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-470074 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/etcd                        | v3.3.8-0-gke.1    | 2a575b86cb352 | 425MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-470074 | b2ba3a92e0bb2 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| docker.io/library/mysql                     | 5.7               | e982339a20a53 | 452MB  |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470074 image ls --format json:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b","repoDigests":[],"repoTags":["registry.k8s.io/etcd:v3.3.8-0-gke.1"],"size":"425000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"b2ba3a92e0bb23bf6beb72231a7dae69d6c17f71494c7e36314851ee8af1ae1a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-470074"],"size":"30"},{"id":"e982339a20a53052bd5f2b2e8438b3c95c91013f653ee781a67934cd1f9f9631","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"452000000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.4"],"size":"49800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-470074"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
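Of the list formats shown here, the JSON variant is the easiest to consume from other tooling: it is a flat array of image records. A sketch of decoding it in Go (field names are taken from the stdout above; the struct itself is assumed, not part of minikube):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord mirrors one element of `minikube image ls --format json`
// as it appears in the stdout above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-470074",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags)
	}
}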

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470074 image ls --format yaml:
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.4
size: "49800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: e982339a20a53052bd5f2b2e8438b3c95c91013f653ee781a67934cd1f9f9631
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "452000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-470074
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: b2ba3a92e0bb23bf6beb72231a7dae69d6c17f71494c7e36314851ee8af1ae1a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-470074
size: "30"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b
repoDigests: []
repoTags:
- registry.k8s.io/etcd:v3.3.8-0-gke.1
size: "425000000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 ssh pgrep buildkitd: exit status 1 (351.488212ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image build -t localhost/my-image:functional-470074 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 image build -t localhost/my-image:functional-470074 testdata/build: (2.078138923s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470074 image build -t localhost/my-image:functional-470074 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d9499c678565
Removing intermediate container d9499c678565
---> 1d75fec7dd4a
Step 3/3 : ADD content.txt /
---> ef09dfaa8bda
Successfully built ef09dfaa8bda
Successfully tagged localhost/my-image:functional-470074
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)

TestFunctional/parallel/ImageCommands/Setup (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.785134145s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-470074
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/DockerEnv/bash (1.49s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-470074 docker-env) && out/minikube-linux-amd64 status -p functional-470074"
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-470074 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.49s)
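Both commands above work because `minikube docker-env` prints export lines that point the docker CLI at the daemon inside the minikube container, and `eval` applies them to the current shell. A sketch of the same wiring without a shell (treating the output as bash-style `export KEY="VALUE"` lines is my assumption):

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Capture the environment that `eval $(minikube docker-env)` would set.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-470074",
		"docker-env").Output()
	if err != nil {
		panic(err)
	}

	env := os.Environ()
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.HasPrefix(line, "export ") {
			continue // skip comments and blank lines
		}
		kv := strings.TrimPrefix(line, "export ")
		env = append(env, strings.ReplaceAll(kv, `"`, "")) // KEY="VALUE" -> KEY=VALUE
	}

	// `docker images` now talks to the daemon inside the minikube container.
	docker := exec.Command("docker", "images")
	docker.Env = env
	docker.Stdout = os.Stdout
	docker.Stderr = os.Stderr
	if err := docker.Run(); err != nil {
		panic(err)
	}
}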

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image load --daemon gcr.io/google-containers/addon-resizer:functional-470074
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 image load --daemon gcr.io/google-containers/addon-resizer:functional-470074: (3.848966356s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 update-context --alsologtostderr -v=2
2023/01/24 17:36:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image load --daemon gcr.io/google-containers/addon-resizer:functional-470074
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 image load --daemon gcr.io/google-containers/addon-resizer:functional-470074: (2.195884303s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.44s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.065413771s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-470074
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image load --daemon gcr.io/google-containers/addon-resizer:functional-470074
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 image load --daemon gcr.io/google-containers/addon-resizer:functional-470074: (4.41808872s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.85s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "410.506659ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "87.438313ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "494.057296ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "95.455466ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

TestFunctional/parallel/MountCmd/any-port (15.4s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-470074 /tmp/TestFunctionalparallelMountCmdany-port180553684/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1674581743573878009" to /tmp/TestFunctionalparallelMountCmdany-port180553684/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674581743573878009" to /tmp/TestFunctionalparallelMountCmdany-port180553684/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674581743573878009" to /tmp/TestFunctionalparallelMountCmdany-port180553684/001/test-1674581743573878009
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (473.533088ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 24 17:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 24 17:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 24 17:35 test-1674581743573878009
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh cat /mount-9p/test-1674581743573878009
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-470074 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9fb2fe50-eaf4-426f-b20f-2618a997d298] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [9fb2fe50-eaf4-426f-b20f-2618a997d298] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [9fb2fe50-eaf4-426f-b20f-2618a997d298] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [9fb2fe50-eaf4-426f-b20f-2618a997d298] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.043644171s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-470074 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh stat /mount-9p/created-by-test
E0124 17:35:57.835459   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470074 /tmp/TestFunctionalparallelMountCmdany-port180553684/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.40s)
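The first findmnt probe fails because the daemonized `minikube mount` is still coming up; the test tolerates this by simply re-running the probe. A sketch of that polling step (a hypothetical helper around the same command the log shows):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs the probe from the log until the 9p filesystem shows
// up at the guest path or the deadline passes.
func waitForMount(profile, guestPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		probe := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", guestPath))
		if err := probe.Run(); err == nil {
			return nil // mount is visible inside the VM
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s never appeared as a 9p mount", guestPath)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForMount("functional-470074", "/mount-9p", time.Minute); err != nil {
		panic(err)
	}
}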

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image save gcr.io/google-containers/addon-resizer:functional-470074 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 image save gcr.io/google-containers/addon-resizer:functional-470074 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.135732201s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image rm gcr.io/google-containers/addon-resizer:functional-470074
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.991723598s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.30s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-470074
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 image save --daemon gcr.io/google-containers/addon-resizer:functional-470074
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-470074 image save --daemon gcr.io/google-containers/addon-resizer:functional-470074: (3.002414421s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-470074
E0124 17:35:53.032406   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.05s)

TestFunctional/parallel/MountCmd/specific-port (2.79s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-470074 /tmp/TestFunctionalparallelMountCmdspecific-port2401103540/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (469.87557ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470074 /tmp/TestFunctionalparallelMountCmdspecific-port2401103540/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-470074 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470074 ssh "sudo umount -f /mount-9p": exit status 1 (481.320183ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-470074 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470074 /tmp/TestFunctionalparallelMountCmdspecific-port2401103540/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.79s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-470074 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.5s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-470074 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [192cb6f5-f2d9-422a-8145-218770c950f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [192cb6f5-f2d9-422a-8145-218770c950f9] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.042554596s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-470074 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
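The jsonpath query reads the LoadBalancer address that `minikube tunnel` assigned to nginx-svc; until the tunnel is up, the field is empty. A sketch of polling it (a hypothetical helper around the same kubectl invocation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ingressIP polls the jsonpath from the log until `minikube tunnel` has
// populated the LoadBalancer ingress address for the service.
func ingressIP(kctx, svc string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", kctx, "get", "svc", svc,
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("no ingress IP assigned to %s", svc)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	ip, err := ingressIP("functional-470074", "nginx-svc", time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("tunnel endpoint: http://" + ip)
}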

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.103.76.87 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-470074 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-470074
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-470074
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-470074
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/NormalBuild (2.33s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-107979
image_test.go:73: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-107979: (2.327849151s)
--- PASS: TestImageBuild/serial/NormalBuild (2.33s)

TestImageBuild/serial/BuildWithBuildArg (1.02s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-107979
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-107979: (1.021115986s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)

TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-107979
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-107979
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)

TestIngressAddonLegacy/StartLegacyK8sCluster (65.02s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-933654 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0124 17:37:14.638448   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-933654 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m5.021925442s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.02s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.14s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-933654 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-933654 addons enable ingress --alsologtostderr -v=5: (11.135741959s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.14s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-933654 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (49.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-933654 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-933654 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.870521416s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-933654 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-933654 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [59383157-4c80-4969-8391-23805f35c597] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0124 17:38:36.558576   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
helpers_test.go:344: "nginx" [59383157-4c80-4969-8391-23805f35c597] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.006124625s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-933654 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-933654 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-933654 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-933654 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-933654 addons disable ingress-dns --alsologtostderr -v=1: (13.130777349s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-933654 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-933654 addons disable ingress --alsologtostderr -v=1: (7.277373791s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.57s)
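Note: the curl in the test above (run inside the node via ssh) exercises host-based routing: the request targets 127.0.0.1 and the ingress controller selects the backend from the HTTP Host header. Below is an illustrative Go sketch of the same probe from outside the node; it is not part of the test run, and it assumes the node IP reported by "minikube ip" in this run (192.168.49.2) and the hostname from the test's ingress rule.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// Routing happens on the Host header, not the URL host:
	// the ingress rule for nginx.example.com matches this request.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}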

TestJSONOutput/start/Command (55.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-695806 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-695806 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (55.889922318s)
--- PASS: TestJSONOutput/start/Command (55.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-695806 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-695806 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-695806 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-695806 --output=json --user=testUser: (5.866958674s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-101778 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-101778 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.346371ms)

-- stdout --
	{"specversion":"1.0","id":"13783820-4314-4a73-818b-596d598d51c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-101778] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d98f7f62-508f-4306-9540-54bfabf7e879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"732a3db3-42cf-4ba5-bc04-07ab84a4b625","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7da90fa-f021-49e0-a20f-770176f36853","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig"}}
	{"specversion":"1.0","id":"f4aff7f8-a9a3-4c22-91ee-628a9539b079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube"}}
	{"specversion":"1.0","id":"718485e5-b4d3-4143-a45b-647c1df09ea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"42161b74-c960-4eb7-ac1f-4c1f0143bd94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68aac754-f85c-4383-8857-d4b79e4db52c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-101778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-101778
--- PASS: TestErrorJSONOutput (0.28s)
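Note: the JSON lines in the stdout block above are minikube's CloudEvents-style progress stream: one event per line, with the event kind in "type" and a string-keyed payload in "data"; here the unsupported --driver=fail ends the stream with an io.k8s.sigs.minikube.error event carrying exit code 56. Below is an illustrative Go sketch for consuming such a stream, not part of the test run; the struct covers only the fields visible above, not minikube's full schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the log above;
// minikube's real event schema has more.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines mixed into the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}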

TestKicCustomNetwork/create_custom_network (39.53s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-483777 --network=
E0124 17:40:28.470081   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:28.475340   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:28.485608   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:28.505905   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:28.546230   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:28.626575   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:28.787003   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:29.107630   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:29.748599   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:31.029694   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:33.590860   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:38.711552   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:40:48.952310   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-483777 --network=: (37.238205577s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-483777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-483777
E0124 17:40:52.714740   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-483777: (2.267820158s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.53s)

TestKicCustomNetwork/use_default_bridge_network (42.64s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-353317 --network=bridge
E0124 17:41:09.432574   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:41:20.400007   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-353317 --network=bridge: (40.577194853s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-353317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-353317
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-353317: (2.03811814s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (42.64s)

TestKicExistingNetwork (39.65s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-815826 --network=existing-network
E0124 17:41:50.394451   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-815826 --network=existing-network: (37.434661934s)
helpers_test.go:175: Cleaning up "existing-network-815826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-815826
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-815826: (2.049158852s)
--- PASS: TestKicExistingNetwork (39.65s)

TestKicCustomSubnet (39.29s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-150401 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-150401 --subnet=192.168.60.0/24: (37.061554281s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-150401 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-150401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-150401
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-150401: (2.200689924s)
--- PASS: TestKicCustomSubnet (39.29s)
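Note: the subnet check above relies on docker's Go-template inspect output: the format string indexes the network's first IPAM config entry and prints its Subnet. Below is an illustrative Go sketch of the same check, not part of the test run; the network name and expected subnet are taken from this run, and the exact assertion in kic_custom_network_test.go may differ.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs: pull the first IPAM config
	// entry's subnet out of the network's inspect JSON.
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-150401", "--format",
		"{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Printf("unexpected subnet: %s\n", got)
	}
}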

TestKicStaticIP (42.11s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-190824 --static-ip=192.168.200.200
E0124 17:43:12.316228   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:43:16.866077   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:16.871319   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:16.881588   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:16.901916   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:16.942224   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:17.022522   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:17.182959   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:17.503521   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:18.144429   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:19.424645   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:21.985703   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:27.106875   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-190824 --static-ip=192.168.200.200: (39.738046967s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-190824 ip
helpers_test.go:175: Cleaning up "static-ip-190824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-190824
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-190824: (2.171278387s)
--- PASS: TestKicStaticIP (42.11s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (79.17s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-786342 --driver=docker  --container-runtime=docker
E0124 17:43:37.347197   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:43:57.828106   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-786342 --driver=docker  --container-runtime=docker: (36.346579824s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-788978 --driver=docker  --container-runtime=docker
E0124 17:44:38.789065   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-788978 --driver=docker  --container-runtime=docker: (37.032097991s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-786342
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-788978
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-788978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-788978
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-788978: (2.246322227s)
helpers_test.go:175: Cleaning up "first-786342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-786342
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-786342: (2.27952379s)
--- PASS: TestMinikubeProfile (79.17s)

TestMountStart/serial/StartWithMountFirst (6.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-982224 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-982224 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.695614426s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.70s)

TestMountStart/serial/VerifyMountFirst (0.34s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-982224 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)

TestMountStart/serial/StartWithMountSecond (6.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-001927 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-001927 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.181259211s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.18s)

TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-001927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-982224 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-982224 --alsologtostderr -v=5: (1.575332606s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-001927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-001927
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-001927: (1.240893635s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.42s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-001927
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-001927: (6.420252301s)
--- PASS: TestMountStart/serial/RestartStopped (7.42s)

TestMountStart/serial/VerifyMountPostStop (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-001927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)

TestMultiNode/serial/FreshStart2Nodes (75.46s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-585561 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0124 17:45:28.470600   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:45:52.714664   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
E0124 17:45:56.157080   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:46:00.709833   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-585561 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m14.895459972s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.46s)

TestMultiNode/serial/DeployApp2Nodes (8.1s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-585561 -- rollout status deployment/busybox: (6.253827906s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-7rp7j -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-c86kc -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-7rp7j -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-c86kc -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-7rp7j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-c86kc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.10s)

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-7rp7j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-7rp7j -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-c86kc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-585561 -- exec busybox-6b86dd6d48-c86kc -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (16.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-585561 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-585561 -v 3 --alsologtostderr: (15.903725668s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.64s)

TestMultiNode/serial/ProfileList (0.4s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

TestMultiNode/serial/CopyFile (11.94s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp testdata/cp-test.txt multinode-585561:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2351278162/001/cp-test_multinode-585561.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561:/home/docker/cp-test.txt multinode-585561-m02:/home/docker/cp-test_multinode-585561_multinode-585561-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m02 "sudo cat /home/docker/cp-test_multinode-585561_multinode-585561-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561:/home/docker/cp-test.txt multinode-585561-m03:/home/docker/cp-test_multinode-585561_multinode-585561-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m03 "sudo cat /home/docker/cp-test_multinode-585561_multinode-585561-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp testdata/cp-test.txt multinode-585561-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2351278162/001/cp-test_multinode-585561-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt multinode-585561:/home/docker/cp-test_multinode-585561-m02_multinode-585561.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561 "sudo cat /home/docker/cp-test_multinode-585561-m02_multinode-585561.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt multinode-585561-m03:/home/docker/cp-test_multinode-585561-m02_multinode-585561-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m03 "sudo cat /home/docker/cp-test_multinode-585561-m02_multinode-585561-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp testdata/cp-test.txt multinode-585561-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2351278162/001/cp-test_multinode-585561-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt multinode-585561:/home/docker/cp-test_multinode-585561-m03_multinode-585561.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561 "sudo cat /home/docker/cp-test_multinode-585561-m03_multinode-585561.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt multinode-585561-m02:/home/docker/cp-test_multinode-585561-m03_multinode-585561-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 ssh -n multinode-585561-m02 "sudo cat /home/docker/cp-test_multinode-585561-m03_multinode-585561-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.94s)

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-585561 node stop m03: (1.254464965s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-585561 status: exit status 7 (576.861787ms)

-- stdout --
	multinode-585561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-585561-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-585561-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr: exit status 7 (579.432733ms)

-- stdout --
	multinode-585561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-585561-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-585561-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0124 17:47:18.047033  147011 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:47:18.047247  147011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:47:18.047258  147011 out.go:309] Setting ErrFile to fd 2...
	I0124 17:47:18.047265  147011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:47:18.047393  147011 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	I0124 17:47:18.047571  147011 out.go:303] Setting JSON to false
	I0124 17:47:18.047600  147011 mustload.go:65] Loading cluster: multinode-585561
	I0124 17:47:18.047725  147011 notify.go:220] Checking for updates...
	I0124 17:47:18.047966  147011 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:47:18.047985  147011 status.go:255] checking status of multinode-585561 ...
	I0124 17:47:18.048340  147011 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:47:18.079054  147011 status.go:330] multinode-585561 host status = "Running" (err=<nil>)
	I0124 17:47:18.079078  147011 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:47:18.079322  147011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561
	I0124 17:47:18.104264  147011 host.go:66] Checking if "multinode-585561" exists ...
	I0124 17:47:18.104604  147011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 17:47:18.104650  147011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
	I0124 17:47:18.130499  147011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
	I0124 17:47:18.221317  147011 ssh_runner.go:195] Run: systemctl --version
	I0124 17:47:18.224788  147011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 17:47:18.233513  147011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 17:47:18.331596  147011 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2023-01-24 17:47:18.253911387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 17:47:18.332352  147011 kubeconfig.go:92] found "multinode-585561" server: "https://192.168.58.2:8443"
	I0124 17:47:18.332375  147011 api_server.go:165] Checking apiserver status ...
	I0124 17:47:18.332418  147011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 17:47:18.341922  147011 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2612/cgroup
	I0124 17:47:18.349313  147011 api_server.go:181] apiserver freezer: "8:freezer:/docker/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/kubepods/burstable/pode6933d1a0858d027c0aa46d814d0f153/8db5094d208beeca4bf08e9a07a4e4d2b13f798b0b771bbd34ef54b3e078011d"
	I0124 17:47:18.349363  147011 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/kubepods/burstable/pode6933d1a0858d027c0aa46d814d0f153/8db5094d208beeca4bf08e9a07a4e4d2b13f798b0b771bbd34ef54b3e078011d/freezer.state
	I0124 17:47:18.355999  147011 api_server.go:203] freezer state: "THAWED"
	I0124 17:47:18.356025  147011 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0124 17:47:18.359598  147011 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0124 17:47:18.359619  147011 status.go:421] multinode-585561 apiserver status = Running (err=<nil>)
	I0124 17:47:18.359629  147011 status.go:257] multinode-585561 status: &{Name:multinode-585561 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0124 17:47:18.359650  147011 status.go:255] checking status of multinode-585561-m02 ...
	I0124 17:47:18.359946  147011 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
	I0124 17:47:18.384424  147011 status.go:330] multinode-585561-m02 host status = "Running" (err=<nil>)
	I0124 17:47:18.384445  147011 host.go:66] Checking if "multinode-585561-m02" exists ...
	I0124 17:47:18.384773  147011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m02
	I0124 17:47:18.409422  147011 host.go:66] Checking if "multinode-585561-m02" exists ...
	I0124 17:47:18.409647  147011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 17:47:18.409683  147011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
	I0124 17:47:18.433949  147011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
	I0124 17:47:18.524994  147011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 17:47:18.534039  147011 status.go:257] multinode-585561-m02 status: &{Name:multinode-585561-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0124 17:47:18.534072  147011 status.go:255] checking status of multinode-585561-m03 ...
	I0124 17:47:18.534313  147011 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
	I0124 17:47:18.558240  147011 status.go:330] multinode-585561-m03 host status = "Stopped" (err=<nil>)
	I0124 17:47:18.558262  147011 status.go:343] host is not running, skipping remaining checks
	I0124 17:47:18.558269  147011 status.go:257] multinode-585561-m03 status: &{Name:multinode-585561-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)

TestMultiNode/serial/RestartKeepsNodes (93.17s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-585561
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-585561
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-585561: (22.548441813s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-585561 --wait=true -v=8 --alsologtostderr
E0124 17:50:28.469953   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 17:50:52.715001   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-585561 --wait=true -v=8 --alsologtostderr: (1m10.473317811s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-585561
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.17s)

TestMultiNode/serial/DeleteNode (5.01s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-585561 node delete m03: (4.314671629s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.01s)
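Note on the final check: the go-template above walks each node's status.conditions and prints the status of its Ready condition. A minimal standalone equivalent, assuming the kubectl context name matches this run's profile, is:

    kubectl --context multinode-585561 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

Each remaining node should print True once it reports Ready.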

TestMultiNode/serial/StopMultiNode (21.66s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-585561 stop: (21.421759922s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-585561 status: exit status 7 (122.369228ms)
-- stdout --
	multinode-585561
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-585561-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr: exit status 7 (115.755203ms)
-- stdout --
	multinode-585561
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-585561-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0124 17:51:47.543417  168766 out.go:296] Setting OutFile to fd 1 ...
	I0124 17:51:47.543651  168766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:51:47.543668  168766 out.go:309] Setting ErrFile to fd 2...
	I0124 17:51:47.543675  168766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 17:51:47.543875  168766 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
	I0124 17:51:47.544050  168766 out.go:303] Setting JSON to false
	I0124 17:51:47.544075  168766 mustload.go:65] Loading cluster: multinode-585561
	I0124 17:51:47.544122  168766 notify.go:220] Checking for updates...
	I0124 17:51:47.544592  168766 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 17:51:47.544612  168766 status.go:255] checking status of multinode-585561 ...
	I0124 17:51:47.545114  168766 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
	I0124 17:51:47.567412  168766 status.go:330] multinode-585561 host status = "Stopped" (err=<nil>)
	I0124 17:51:47.567439  168766 status.go:343] host is not running, skipping remaining checks
	I0124 17:51:47.567444  168766 status.go:257] multinode-585561 status: &{Name:multinode-585561 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0124 17:51:47.567470  168766 status.go:255] checking status of multinode-585561-m02 ...
	I0124 17:51:47.567668  168766 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
	I0124 17:51:47.590233  168766 status.go:330] multinode-585561-m02 host status = "Stopped" (err=<nil>)
	I0124 17:51:47.590263  168766 status.go:343] host is not running, skipping remaining checks
	I0124 17:51:47.590270  168766 status.go:257] multinode-585561-m02 status: &{Name:multinode-585561-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.66s)

TestMultiNode/serial/RestartMultiNode (58.59s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-585561 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0124 17:52:15.760649   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-585561 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.870242587s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-585561 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.59s)

TestMultiNode/serial/ValidateNameConflict (39.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-585561
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-585561-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-585561-m02 --driver=docker  --container-runtime=docker: exit status 14 (93.479093ms)
-- stdout --
	* [multinode-585561-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-585561-m02' is duplicated with machine name 'multinode-585561-m02' in profile 'multinode-585561'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-585561-m03 --driver=docker  --container-runtime=docker
E0124 17:53:16.866147   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-585561-m03 --driver=docker  --container-runtime=docker: (36.549750965s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-585561
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-585561: exit status 80 (382.767953ms)
-- stdout --
	* Adding node m03 to cluster multinode-585561
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-585561-m03 already exists in multinode-585561-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-585561-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-585561-m03: (2.317351177s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.42s)

TestPreload (126.83s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-774564 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-774564 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (56.103205422s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-774564 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-774564 -- docker pull gcr.io/k8s-minikube/busybox: (1.40497309s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-774564
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-774564: (10.719907428s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-774564 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0124 17:55:28.469748   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-774564 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (55.982213844s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-774564 -- docker images
helpers_test.go:175: Cleaning up "test-preload-774564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-774564
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-774564: (2.257546354s)
--- PASS: TestPreload (126.83s)

TestScheduledStopUnix (111.51s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-881246 --memory=2048 --driver=docker  --container-runtime=docker
E0124 17:55:52.714994   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-881246 --memory=2048 --driver=docker  --container-runtime=docker: (37.879011247s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881246 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-881246 -n scheduled-stop-881246
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881246 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881246 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-881246 -n scheduled-stop-881246
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-881246
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881246 --schedule 15s
E0124 17:56:51.518027   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-881246
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-881246: exit status 7 (96.115221ms)
-- stdout --
	scheduled-stop-881246
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-881246 -n scheduled-stop-881246
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-881246 -n scheduled-stop-881246: exit status 7 (92.377358ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-881246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-881246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-881246: (1.776988287s)
--- PASS: TestScheduledStopUnix (111.51s)
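The scheduled-stop sequence exercised above can be reproduced by hand; a minimal sketch using the same binary and flags as this run:

    out/minikube-linux-amd64 stop -p scheduled-stop-881246 --schedule 5m        # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-881246 --cancel-scheduled   # disarm the pending stop
    out/minikube-linux-amd64 stop -p scheduled-stop-881246 --schedule 15s       # arm again and let it fire
    out/minikube-linux-amd64 status -p scheduled-stop-881246                    # exits 7 once the host is Stopped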

TestSkaffold (71.97s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1467636612 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-788773 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-788773 --memory=2600 --driver=docker  --container-runtime=docker: (38.574994621s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1467636612 run --minikube-profile skaffold-788773 --kube-context skaffold-788773 --status-check=true --port-forward=false --interactive=false
E0124 17:58:16.866037   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1467636612 run --minikube-profile skaffold-788773 --kube-context skaffold-788773 --status-check=true --port-forward=false --interactive=false: (18.504205591s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-759645fb7c-l7wpx" [9ada5757-fdec-485b-95b3-ecbb38e51069] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012468708s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-844574bbfc-sllc7" [d5ff523f-2566-4d2b-a7ff-e20da3cd7a74] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005849745s
helpers_test.go:175: Cleaning up "skaffold-788773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-788773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-788773: (2.455586156s)
--- PASS: TestSkaffold (71.97s)

TestInsufficientStorage (11.82s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-808290 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-808290 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.383241497s)
-- stdout --
	{"specversion":"1.0","id":"cb464ea4-d6ba-42a3-bc29-2d706d86f0db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-808290] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b048284-ddcd-4508-bdda-61a4a514bf48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"f29aa98b-5390-4bee-9889-39cc251578aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a8026f2c-fe8e-45df-aeeb-24ad84081245","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig"}}
	{"specversion":"1.0","id":"94b6715b-dec1-4115-868e-bb3a46df0b6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube"}}
	{"specversion":"1.0","id":"fe6b9476-4b0f-48e7-b633-3e7bec941e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fd83e8c2-4711-4aa5-9784-07032136a4c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"250769cd-8229-470a-82cd-85de5a61070e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"389808f8-822d-49a9-a72f-f3914b090837","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9fbe49a1-9a1f-4101-9dc4-86966eaf6b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c58200ed-0933-4bf7-ba3c-5f4801c128b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fc55d6c2-a72e-40b9-9bcc-c5288a22dd43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-808290 in cluster insufficient-storage-808290","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c22a19b3-fa55-4c65-9201-37c05c2dde4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0588f77-606a-4f06-808d-6b125e21fd04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"972e75da-9385-47d9-a0a5-7636893aab66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-808290 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-808290 --output=json --layout=cluster: exit status 7 (340.820863ms)
-- stdout --
	{"Name":"insufficient-storage-808290","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-808290","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0124 17:58:51.841126  209488 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-808290" does not appear in /home/jenkins/minikube-integration/15565-3637/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-808290 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-808290 --output=json --layout=cluster: exit status 7 (344.33366ms)
-- stdout --
	{"Name":"insufficient-storage-808290","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-808290","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0124 17:58:52.185572  209599 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-808290" does not appear in /home/jenkins/minikube-integration/15565-3637/kubeconfig
	E0124 17:58:52.193829  209599 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/insufficient-storage-808290/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-808290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-808290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-808290: (1.754087245s)
--- PASS: TestInsufficientStorage (11.82s)
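In the cluster-layout JSON above, StatusCode 507 corresponds to InsufficientStorage, 405 to Stopped, and 500 to Error (the kubeconfig component, since the endpoint never made it into the kubeconfig). An illustrative way to pull out just the per-node component states, assuming jq is available on the host:

    out/minikube-linux-amd64 status -p insufficient-storage-808290 --output=json --layout=cluster \
      | jq '.Nodes[] | {node: .Name, status: .StatusName, components: .Components}'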

TestRunningBinaryUpgrade (100.28s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.304713245.exe start -p running-upgrade-194836 --memory=2200 --vm-driver=docker  --container-runtime=docker
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.304713245.exe start -p running-upgrade-194836 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m10.317861733s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-194836 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-194836 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.481024462s)
helpers_test.go:175: Cleaning up "running-upgrade-194836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-194836
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-194836: (2.289386227s)
--- PASS: TestRunningBinaryUpgrade (100.28s)

TestKubernetesUpgrade (415.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m7.962303109s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-295340
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-295340: (12.083198297s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-295340 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-295340 status --format={{.Host}}: exit status 7 (136.414135ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m50.872033124s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-295340 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (456.188624ms)
-- stdout --
	* [kubernetes-upgrade-295340] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-295340
	    minikube start -p kubernetes-upgrade-295340 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2953402 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-295340 --kubernetes-version=v1.26.1
	    
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.204709347s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-295340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-295340
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-295340: (2.79274977s)
--- PASS: TestKubernetesUpgrade (415.62s)
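Distilled from the run above, the upgrade path this test validates (same binary, flags, and versions as logged) is:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-295340
    out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --kubernetes-version=v1.26.1 --driver=docker --container-runtime=docker
    # an in-place downgrade is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED):
    out/minikube-linux-amd64 start -p kubernetes-upgrade-295340 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker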

TestMissingContainerUpgrade (112.91s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.4139014317.exe start -p missing-upgrade-289508 --memory=2200 --driver=docker  --container-runtime=docker
E0124 17:59:39.910707   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.4139014317.exe start -p missing-upgrade-289508 --memory=2200 --driver=docker  --container-runtime=docker: (1m8.474744642s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-289508
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-289508: (2.243362062s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-289508
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-289508 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-289508 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.077038636s)
helpers_test.go:175: Cleaning up "missing-upgrade-289508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-289508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-289508: (2.286901793s)
--- PASS: TestMissingContainerUpgrade (112.91s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251255 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-251255 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (118.084085ms)
-- stdout --
	* [NoKubernetes-251255] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
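As the MK_USAGE error states, --no-kubernetes and --kubernetes-version are mutually exclusive; either is valid on its own. For illustration (the version here is assumed, pinned to the one used elsewhere in this run):

    out/minikube-linux-amd64 start -p NoKubernetes-251255 --no-kubernetes --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 start -p NoKubernetes-251255 --kubernetes-version=v1.26.1 --driver=docker --container-runtime=docker

A version pinned in the global config can be cleared with `minikube config unset kubernetes-version`, as the message suggests.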

TestNoKubernetes/serial/StartWithK8s (61.3s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251255 --driver=docker  --container-runtime=docker
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251255 --driver=docker  --container-runtime=docker: (1m0.874504407s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-251255 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (61.30s)

TestNoKubernetes/serial/StartWithStopK8s (17.18s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251255 --no-kubernetes --driver=docker  --container-runtime=docker
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251255 --no-kubernetes --driver=docker  --container-runtime=docker: (14.59017127s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-251255 status -o json
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-251255 status -o json: exit status 2 (384.972073ms)
-- stdout --
	{"Name":"NoKubernetes-251255","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-251255
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-251255: (2.204107533s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.18s)

TestNoKubernetes/serial/Start (10.61s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251255 --no-kubernetes --driver=docker  --container-runtime=docker
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251255 --no-kubernetes --driver=docker  --container-runtime=docker: (10.605993641s)
--- PASS: TestNoKubernetes/serial/Start (10.61s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-251255 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-251255 "sudo systemctl is-active --quiet service kubelet": exit status 1 (493.283824ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)
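The exit status 1 above is the expected result: systemctl is-active exits 0 only when the unit is active and non-zero otherwise (3 for an inactive unit, surfaced here as ssh "Process exited with status 3"), so the failure confirms no kubelet is running. An equivalent manual probe might look like:

    out/minikube-linux-amd64 ssh -p NoKubernetes-251255 "sudo systemctl is-active kubelet"; echo "exit=$?"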

TestNoKubernetes/serial/ProfileList (1.49s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.49s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-251255
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-251255: (1.29451555s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (9.74s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251255 --driver=docker  --container-runtime=docker
E0124 18:00:28.469691   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251255 --driver=docker  --container-runtime=docker: (9.737092736s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.74s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-251255 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-251255 "sudo systemctl is-active --quiet service kubelet": exit status 1 (427.838329ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestStoppedBinaryUpgrade/Setup (1.93s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.93s)

TestStoppedBinaryUpgrade/Upgrade (107.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.1291735358.exe start -p stopped-upgrade-578376 --memory=2200 --vm-driver=docker  --container-runtime=docker
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.1291735358.exe start -p stopped-upgrade-578376 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m13.62765552s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.1291735358.exe -p stopped-upgrade-578376 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.1291735358.exe -p stopped-upgrade-578376 stop: (12.954591382s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-578376 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-578376 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.146229075s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.73s)

TestPause/serial/Start (57.4s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-978016 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-978016 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (57.403642678s)
--- PASS: TestPause/serial/Start (57.40s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.9s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-578376
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-578376: (2.899368924s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.90s)

TestPause/serial/SecondStartNoReconfiguration (46.3s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-978016 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0124 18:03:29.643338   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:29.648612   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:29.658890   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:29.679519   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:29.720216   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:29.800326   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:29.960596   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:30.280912   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:30.922039   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:32.202984   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:34.763538   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:39.883892   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:03:50.124434   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-978016 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.281736918s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.30s)

TestNetworkPlugins/group/auto/Start (105.34s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m45.341362657s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.34s)

TestPause/serial/Pause (0.7s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-978016 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

TestPause/serial/VerifyStatus (0.45s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-978016 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-978016 --output=json --layout=cluster: exit status 2 (452.68021ms)

-- stdout --
	{"Name":"pause-978016","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-978016","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-978016 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.88s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-978016 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

TestPause/serial/DeletePaused (2.36s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-978016 --alsologtostderr -v=5
E0124 18:04:10.604882   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-978016 --alsologtostderr -v=5: (2.360094524s)
--- PASS: TestPause/serial/DeletePaused (2.36s)

TestPause/serial/VerifyDeletedResources (0.8s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-978016
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-978016: exit status 1 (31.325329ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-978016

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.80s)

TestNetworkPlugins/group/kindnet/Start (65.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m5.831205032s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.83s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fgfsq" [7bfa32a2-15c5-42c0-acd3-d6edef00b3ac] Running

=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014965508s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/Start (83.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m23.726859169s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.73s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8p6lz" [38e1d572-9a0f-41b0-9801-45956dc3be55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0124 18:05:28.470446   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-8p6lz" [38e1d572-9a0f-41b0-9801-45956dc3be55] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.009251143s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-r87dn" [9ca8787c-5400-4518-98da-40e51a68b35d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-r87dn" [9ca8787c-5400-4518-98da-40e51a68b35d] Running
E0124 18:05:52.714198   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006222842s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (74.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0124 18:06:13.486334   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m14.280530453s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.28s)

TestNetworkPlugins/group/false/Start (55.98s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (55.983035663s)
--- PASS: TestNetworkPlugins/group/false/Start (55.98s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9k599" [aa2a8f9a-d9ed-45ae-a4b9-b65f647261d1] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016456573s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-74c8q" [972b8a0d-daa6-4d50-939b-1564285daf5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-74c8q" [972b8a0d-daa6-4d50-939b-1564285daf5e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00748491s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.33s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (67.51s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m7.514589361s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.51s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-m92pg" [1e136980-480f-4bfe-9e61-3f3c76d05407] Pending
helpers_test.go:344: "netcat-694fc96674-m92pg" [1e136980-480f-4bfe-9e61-3f3c76d05407] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-m92pg" [1e136980-480f-4bfe-9e61-3f3c76d05407] Running

=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.008005826s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

TestNetworkPlugins/group/false/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

TestNetworkPlugins/group/false/NetCatPod (15.27s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-t2vjq" [bac34d99-56c8-4d4c-8ce3-c18e2a97ef05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:344: "netcat-694fc96674-t2vjq" [bac34d99-56c8-4d4c-8ce3-c18e2a97ef05] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.008289951s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/Start (95.49s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m35.489329507s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (95.49s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (71.58s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m11.578427216s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.58s)

TestNetworkPlugins/group/enable-default-cni/Start (54.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-647540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (54.946329946s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.95s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)

TestNetworkPlugins/group/bridge/NetCatPod (17.44s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-cq22j" [f9ee4458-956c-4b36-921f-37a75b670de1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0124 18:08:16.865462   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-cq22j" [f9ee4458-956c-4b36-921f-37a75b670de1] Running
E0124 18:08:29.643716   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 17.014266352s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (17.44s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (113.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-091989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-091989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (1m53.908059968s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (113.91s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4wvm7" [123386e9-3f04-4d6a-97ac-5a6086bce369] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:344: "netcat-694fc96674-4wvm7" [123386e9-3f04-4d6a-97ac-5a6086bce369] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.00863029s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.23s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rjpfd" [be4b05ce-cb07-464c-9e4c-cb2eba63137f] Running

=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.01477116s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-k6c4b" [064c1cf1-e3f7-4329-87fc-3dd8db2f626f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-k6c4b" [064c1cf1-e3f7-4329-87fc-3dd8db2f626f] Running

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.008004793s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-647540 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-647540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-mj5z9" [f00af6e7-bda2-4725-b223-8fe8a492e879] Pending

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:344: "netcat-694fc96674-mj5z9" [f00af6e7-bda2-4725-b223-8fe8a492e879] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:344: "netcat-694fc96674-mj5z9" [f00af6e7-bda2-4725-b223-8fe8a492e879] Running

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00664741s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/kubenet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-647540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-647540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)
E0124 18:18:44.418750   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:18:59.454578   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:19:04.607295   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:19:06.301298   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:19:27.138463   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:19:32.291787   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:19:33.986913   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:19:52.688094   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (58.74s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-493628 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-493628 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (58.738242687s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.74s)

TestStartStop/group/embed-certs/serial/FirstStart (58.8s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-767237 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-767237 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (58.797344405s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.80s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-904234 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0124 18:10:19.259534   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:19.264866   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:19.275128   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:19.296017   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:19.336306   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:19.416530   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:19.576817   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:19.897313   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:20.538456   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:21.819070   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:24.379372   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:10:28.469770   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 18:10:29.500084   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-904234 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (1m0.280267008s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.28s)

TestStartStop/group/no-preload/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-493628 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2a566958-51d7-4631-8730-be99e7c64c54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2a566958-51d7-4631-8730-be99e7c64c54] Running
E0124 18:10:39.740768   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.013184226s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-493628 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767237 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [66aa2f1f-8074-459c-8f4d-df803454ca74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [66aa2f1f-8074-459c-8f4d-df803454ca74] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.012456399s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767237 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-493628 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-493628 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/no-preload/serial/Stop (11.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-493628 --alsologtostderr -v=3
E0124 18:10:47.114297   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:47.119532   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:47.129803   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:47.150090   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:47.190356   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:47.270723   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:47.431382   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:47.751945   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:48.393125   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-493628 --alsologtostderr -v=3: (11.058589222s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.06s)
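Stop shuts down the profile's node container without deleting it, so the later SecondStart can reuse it; roughly 11s is typical for a single-node docker-driver cluster in this run. A sketch of the equivalent commands:

	minikube stop -p no-preload-493628 --alsologtostderr -v=3
	minikube status -p no-preload-493628 --format={{.Host}}   # prints Stopped, exits 7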

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-904234 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [35dc70cf-7ef5-4fde-8a7c-f38e7f1bf894] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0124 18:10:49.673628   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
helpers_test.go:344: "busybox" [35dc70cf-7ef5-4fde-8a7c-f38e7f1bf894] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.011986655s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-904234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.7s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-767237 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-767237 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/embed-certs/serial/Stop (11.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-767237 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-767237 --alsologtostderr -v=3: (11.09448326s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.09s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-091989 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [63819649-fc38-4fa2-b49a-3a6ecc39c51c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0124 18:10:52.233754   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:10:52.714470   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
helpers_test.go:344: "busybox" [63819649-fc38-4fa2-b49a-3a6ecc39c51c] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.012914022s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-091989 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-493628 -n no-preload-493628
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-493628 -n no-preload-493628: exit status 7 (133.150035ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-493628 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
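EnableAddonAfterStop relies on minikube status exiting 7 for a stopped host, which the test explicitly tolerates ("may be ok"); enabling the dashboard addon against the stopped profile presumably only records it in the profile config, to take effect on the next start. A shell sketch of that exit-code handling:

	minikube status --format={{.Host}} -p no-preload-493628
	if [ $? -eq 7 ]; then
	  echo "host is stopped; proceeding anyway"
	fi
	minikube addons enable dashboard -p no-preload-493628 \
	  --images=MetricsScraper=k8s.gcr.io/echoserver:1.4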

TestStartStop/group/no-preload/serial/SecondStart (561.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-493628 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-493628 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m20.658373944s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-493628 -n no-preload-493628
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (561.08s)
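SecondStart restarts the stopped profile with the same flags as the first start; because --preload=false disables the preloaded image tarball, the cluster pulls its images individually, which is consistent with the ~9m20s wall time here. The invocation, verbatim from the log, followed by the readiness check:

	out/minikube-linux-amd64 start -p no-preload-493628 --memory=2200 \
	  --alsologtostderr --wait=true --preload=false \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.26.1
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-493628 -n no-preload-493628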

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-904234 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0124 18:10:57.354504   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-904234 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-904234 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-904234 --alsologtostderr -v=3: (10.826238579s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.83s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-091989 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0124 18:11:00.220956   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-091989 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/old-k8s-version/serial/Stop (10.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-091989 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-091989 --alsologtostderr -v=3: (10.924243627s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.92s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767237 -n embed-certs-767237
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767237 -n embed-certs-767237: exit status 7 (135.750998ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-767237 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (559.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-767237 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0124 18:11:07.595644   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-767237 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m18.892561081s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-767237 -n embed-certs-767237
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (559.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234: exit status 7 (124.241983ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-904234 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (562.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-904234 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-904234 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m22.234510692s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (562.68s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-091989 -n old-k8s-version-091989
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-091989 -n old-k8s-version-091989: exit status 7 (95.982059ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-091989 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (339.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-091989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0124 18:11:28.076721   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:11:41.181724   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:11:45.262133   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:45.267405   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:45.277696   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:45.298021   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:45.338327   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:45.418629   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:45.579049   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:45.899304   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:46.539978   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:47.820739   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:50.381354   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:11:55.502039   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:12:05.742338   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:12:09.037270   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:12:16.287593   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:16.292881   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:16.303216   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:16.323513   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:16.363819   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:16.444173   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:16.604622   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:16.925102   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:17.565688   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:18.846835   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:21.407153   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:22.280916   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:22.286169   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:22.296408   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:22.316676   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:22.356960   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:22.437245   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:22.597670   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:22.918561   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:23.559665   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:24.840616   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:26.222747   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:12:26.527659   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:27.400925   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:32.521722   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:36.768434   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:12:42.762056   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:12:57.249026   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:13:03.102814   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:13:03.243115   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:13:07.183815   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:13:16.735226   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:16.740563   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:16.750823   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:16.771116   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:16.811435   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:16.865641   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 18:13:16.891931   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:17.052124   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:17.372418   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:18.012666   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:19.293574   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:21.854341   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:26.974904   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:29.643363   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
E0124 18:13:30.957916   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:13:31.519013   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 18:13:37.215925   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:38.210023   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:13:44.204006   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:13:57.696606   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:13:59.454068   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:13:59.459388   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:13:59.469651   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:13:59.489909   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:13:59.530194   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:13:59.610524   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:13:59.771214   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:00.091813   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:00.732854   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:02.014024   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:04.574346   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:04.607105   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:04.612374   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:04.622647   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:04.642910   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:04.683212   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:04.763533   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:04.923937   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:05.244443   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:05.885537   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:06.301811   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:06.307064   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:06.317358   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:06.337702   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:06.377992   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:06.458361   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:06.618772   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:06.939546   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:07.166343   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:07.579771   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:08.860937   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:09.695044   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:09.727279   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:11.421751   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:14.848407   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:16.542386   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:19.935271   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:25.089175   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:26.783362   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:14:29.104625   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:14:38.657433   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:14:40.416373   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:14:45.570277   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:14:47.264055   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:15:00.130240   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:15:06.124162   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:15:19.260168   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:15:21.377543   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:15:26.530796   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:15:28.224733   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
E0124 18:15:28.470315   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
E0124 18:15:46.943582   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory
E0124 18:15:47.113755   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:15:52.714263   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/addons-573842/client.crt: no such file or directory
E0124 18:16:00.578300   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:16:14.798825   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/auto-647540/client.crt: no such file or directory
E0124 18:16:19.911737   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 18:16:43.297760   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/enable-default-cni-647540/client.crt: no such file or directory
E0124 18:16:45.262375   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:16:48.451197   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/flannel-647540/client.crt: no such file or directory
E0124 18:16:50.145748   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kubenet-647540/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-091989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m38.646951949s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-091989 -n old-k8s-version-091989
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (339.07s)
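The old-k8s-version variant runs the same restart flow pinned to the oldest Kubernetes release this job exercises; the version pin is the flag that defines the scenario, so a trimmed sketch of the restart (the full flag set is in the Run line above) looks like:

	out/minikube-linux-amd64 start -p old-k8s-version-091989 --memory=2200 \
	  --wait=true --driver=docker --container-runtime=docker \
	  --kubernetes-version=v1.16.0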

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6p7pm" [518bb9fe-7d24-4695-b1e9-908a2d4a35aa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012252563s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
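UserAppExistsAfterStop polls for up to 9m until the dashboard pod deployed before the stop is Running and healthy again. A comparable manual check, assuming the dashboard addon enabled in the earlier EnableAddonAfterStop step:

	kubectl --context old-k8s-version-091989 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s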

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6p7pm" [518bb9fe-7d24-4695-b1e9-908a2d4a35aa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006873253s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-091989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-091989 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.40s)
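VerifyKubernetesImages lists the images in the node's container runtime over SSH and reports anything that is not a stock Kubernetes/minikube image (here the busybox test image). The underlying command, plus one way to read the JSON, assuming jq is available:

	out/minikube-linux-amd64 ssh -p old-k8s-version-091989 "sudo crictl images -o json"
	out/minikube-linux-amd64 ssh -p old-k8s-version-091989 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'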

TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-091989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-091989 -n old-k8s-version-091989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-091989 -n old-k8s-version-091989: exit status 2 (398.091775ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-091989 -n old-k8s-version-091989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-091989 -n old-k8s-version-091989: exit status 2 (411.066522ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-091989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-091989 -n old-k8s-version-091989
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-091989 -n old-k8s-version-091989
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
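Pause freezes the control plane: status then reports the API server as Paused and the kubelet as Stopped, and exits 2, which the test accepts before unpausing. The sequence as plain commands:

	out/minikube-linux-amd64 pause -p old-k8s-version-091989 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-091989   # Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-091989     # Stopped, exit 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-091989 --alsologtostderr -v=1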

TestStartStop/group/newest-cni/serial/FirstStart (50.98s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-789329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0124 18:17:12.945480   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/calico-647540/client.crt: no such file or directory
E0124 18:17:16.287757   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:17:22.280919   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
E0124 18:17:43.970622   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/custom-flannel-647540/client.crt: no such file or directory
E0124 18:17:49.965263   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/false-647540/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-789329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (50.97957412s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.98s)
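The newest-cni group starts with a bare CNI network plugin, a ServerSideApply feature gate, and an extra kubeadm pod CIDR; --wait is narrowed to apiserver, system_pods, and default_sa because workload pods cannot schedule until a CNI is installed (hence the WARNING lines in the later subtests). The invocation, verbatim from the log:

	out/minikube-linux-amd64 start -p newest-cni-789329 --memory=2200 \
	  --alsologtostderr --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.26.1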

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-789329 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/newest-cni/serial/Stop (10.63s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-789329 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-789329 --alsologtostderr -v=3: (10.629616044s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.63s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-789329 -n newest-cni-789329
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-789329 -n newest-cni-789329: exit status 7 (101.61329ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-789329 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (27.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-789329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0124 18:18:16.735471   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/bridge-647540/client.crt: no such file or directory
E0124 18:18:16.865657   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 18:18:29.642994   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/skaffold-788773/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-789329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (27.552256044s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-789329 -n newest-cni-789329
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.95s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-789329 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (3.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-789329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-789329 -n newest-cni-789329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-789329 -n newest-cni-789329: exit status 2 (388.950739ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-789329 -n newest-cni-789329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-789329 -n newest-cni-789329: exit status 2 (397.700401ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-789329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-789329 -n newest-cni-789329
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-789329 -n newest-cni-789329
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.09s)
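The "exit status 2 (may be ok)" annotations above, like the earlier "exit status 7 (may be ok)", follow the convention described in `minikube status --help`: component health is encoded bitwise in the exit code, so a non-zero status after pause is expected rather than an error. A minimal Go sketch of decoding it (profile name taken from this run, purely for illustration; not part of the suite):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// After `minikube pause`, `status` exits non-zero by design.
	cmd := exec.Command("out/minikube-linux-amd64", "status", "-p", "newest-cni-789329")
	_ = cmd.Run() // the exit code, not the error, carries the state
	if cmd.ProcessState == nil {
		panic("minikube binary did not start")
	}
	code := cmd.ProcessState.ExitCode()
	// Per `minikube status --help`, bits from right to left:
	// 1 = host not OK, 2 = cluster (apiserver) not OK, 4 = Kubernetes not OK.
	// So 7 after `stop` means everything is down, and 2 after `pause`
	// matches the Paused apiserver shown in the stdout blocks above.
	fmt.Printf("host down: %v, apiserver down/paused: %v, kubelet down: %v\n",
		code&1 != 0, code&2 != 0, code&4 != 0)
}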

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-cfwt5" [0c711eaa-6b5b-4728-9067-6cbf15310922] Running
E0124 18:20:19.259843   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/kindnet-647540/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013624585s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
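The "waiting 9m0s for pods matching ..." lines come from the suite's pod-wait helper: poll until every pod behind a label selector reports Running, and record how long that took ("healthy within 5.01s" here). A rough, self-contained sketch of the same idea with client-go (not the suite's actual code; the helper name and 2-second poll interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until all pods matching selector in ns are Running.
func waitForRunning(c *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling through transient errors
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = waitForRunning(client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("healthy:", err == nil)
}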

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-p74m6" [b37fea61-807b-4f6f-86f0-7bb95e1ab24a] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011722796s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-cfwt5" [0c711eaa-6b5b-4728-9067-6cbf15310922] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007951508s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-493628 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-p74m6" [b37fea61-807b-4f6f-86f0-7bb95e1ab24a] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007209453s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-767237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-493628 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)
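VerifyKubernetesImages works by listing the images on the node over SSH and comparing them against the expected Kubernetes set; the "Found non-minikube image" line above is that comparison flagging the busybox image loaded by an earlier deploy step. A sketch of the idea (the JSON shape matches `crictl images -o json`; the expected-registry heuristic is an assumption, not the suite's real allow-list):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList matches the JSON emitted by `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Profile name is illustrative; `minikube ssh` runs the command on the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "no-preload-493628",
		"ssh", "sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Heuristic only: treat registry.k8s.io / k8s.gcr.io images as expected.
			if !strings.HasPrefix(tag, "registry.k8s.io/") && !strings.HasPrefix(tag, "k8s.gcr.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}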

TestStartStop/group/no-preload/serial/Pause (3.05s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-493628 --alsologtostderr -v=1
E0124 18:20:28.469807   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/functional-470074/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-493628 -n no-preload-493628
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-493628 -n no-preload-493628: exit status 2 (386.214165ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-493628 -n no-preload-493628
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-493628 -n no-preload-493628: exit status 2 (394.468995ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-493628 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-493628 -n no-preload-493628
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-493628 -n no-preload-493628
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.05s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-j8k59" [f540ad29-f60b-40c9-b895-e85066e8347c] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013306833s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-767237 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/embed-certs/serial/Pause (3.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-767237 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767237 -n embed-certs-767237
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767237 -n embed-certs-767237: exit status 2 (438.89778ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-767237 -n embed-certs-767237
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-767237 -n embed-certs-767237: exit status 2 (534.194263ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-767237 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-767237 -n embed-certs-767237
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-767237 -n embed-certs-767237
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.24s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-j8k59" [f540ad29-f60b-40c9-b895-e85066e8347c] Running
E0124 18:20:36.635543   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:36.641092   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:36.651371   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:36.671694   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:36.712010   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:36.792423   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:36.953332   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:37.273883   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
E0124 18:20:37.914935   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006461856s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-904234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-904234 "sudo crictl images -o json"
E0124 18:20:41.756572   10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/no-preload-493628/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-904234 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234: exit status 2 (364.920456ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234: exit status 2 (365.947934ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-904234 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-904234 -n default-k8s-diff-port-904234
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

Test skip (19/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
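These DownloadOnly skips key off whether the preloaded-images tarball is already on disk: if it is, there is nothing for the cache to do, so the test is skipped. A sketch of that guard (the helper name and signature are assumed, not copied from aaa_download_only_test.go):

package download_test // sketch only; modeled on the skip lines above

import (
	"os"
	"testing"
)

// skipIfPreloaded skips the cached-images/binaries checks when the preload
// tarball for the target Kubernetes version already exists locally.
func skipIfPreloaded(t *testing.T, tarball string) {
	if _, err := os.Stat(tarball); err == nil {
		t.Skip("Preload exists, images won't be cached")
	}
}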

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
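The "Skip if not darwin." skips here, like the "test only runs on windows" one further down, use Go's standard platform-guard idiom. A minimal sketch of the pattern (the guard body is an assumption; only the skip behavior is taken from the log):

package driver_test // sketch of the platform guard behind these skips

import (
	"runtime"
	"testing"
)

func TestHyperKitDriverInstallOrUpdate(t *testing.T) {
	// HyperKit only exists on macOS, so everywhere else the test bails out
	// before doing any work, which is why it reports 0.00s.
	if runtime.GOOS != "darwin" {
		t.Skipf("Skip if not darwin. (GOOS=%s)", runtime.GOOS)
	}
	// darwin-only install/upgrade assertions would follow here.
}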

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-647540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-647540

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-647540

>>> host: /etc/nsswitch.conf:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /etc/hosts:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /etc/resolv.conf:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-647540

>>> host: crictl pods:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: crictl containers:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> k8s: describe netcat deployment:
error: context "cilium-647540" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-647540" does not exist

>>> k8s: netcat logs:
error: context "cilium-647540" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-647540" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-647540" does not exist

>>> k8s: coredns logs:
error: context "cilium-647540" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-647540" does not exist

>>> k8s: api server logs:
error: context "cilium-647540" does not exist

>>> host: /etc/cni:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: ip a s:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: ip r s:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: iptables-save:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: iptables table nat:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-647540

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-647540

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-647540" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-647540" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-647540

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-647540

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-647540" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-647540" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-647540" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-647540" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-647540" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: kubelet daemon config:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> k8s: kubelet logs:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-647540

>>> host: docker daemon status:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: docker daemon config:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: docker system info:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: cri-docker daemon status:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: cri-docker daemon config:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: cri-dockerd version:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: containerd daemon status:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: containerd daemon config:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: containerd config dump:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: crio daemon status:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: crio daemon config:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: /etc/crio:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

>>> host: crio config:
* Profile "cilium-647540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647540"

----------------------- debugLogs end: cilium-647540 [took: 5.121223487s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-647540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-647540
--- SKIP: TestNetworkPlugins/group/cilium (5.34s)

TestStartStop/group/disable-driver-mounts (0.32s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-394957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-394957
--- SKIP: TestStartStop/group/disable-driver-mounts (0.32s)