Test Report: Docker_Linux_containerd master

c3c4d0455dfed89650fdf54f9f70d551912b4969:2021-08-14:20006

Test fail (13/264)

TestScheduledStopUnix (88.47s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210814093051-6746 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210814093051-6746 --memory=2048 --driver=docker  --container-runtime=containerd: (42.768439541s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210814093051-6746 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210814093051-6746 -n scheduled-stop-20210814093051-6746
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210814093051-6746 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210814093051-6746 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210814093051-6746 -n scheduled-stop-20210814093051-6746
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210814093051-6746
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210814093051-6746 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210814093051-6746
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210814093051-6746: exit status 3 (2.01825569s)
-- stdout --
	scheduled-stop-20210814093051-6746
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
-- /stdout --
** stderr ** 
	E0814 09:32:14.082973  125604 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0814 09:32:14.083032  125604 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3
-- stdout --
	scheduled-stop-20210814093051-6746
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
-- /stdout --
** stderr ** 
	E0814 09:32:14.082973  125604 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0814 09:32:14.083032  125604 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-14 09:32:14.084962883 +0000 UTC m=+1656.032875835
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210814093051-6746
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210814093051-6746:
-- stdout --
	[
	    {
	        "Id": "04b96b7c3b6ed959035eb33122f22669e1251218a5adb9a312aafebe9b483b24",
	        "Created": "2021-08-14T09:30:52.572712965Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:30:53.02852588Z",
	            "FinishedAt": "2021-08-14T09:32:12.149257581Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/04b96b7c3b6ed959035eb33122f22669e1251218a5adb9a312aafebe9b483b24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04b96b7c3b6ed959035eb33122f22669e1251218a5adb9a312aafebe9b483b24/hostname",
	        "HostsPath": "/var/lib/docker/containers/04b96b7c3b6ed959035eb33122f22669e1251218a5adb9a312aafebe9b483b24/hosts",
	        "LogPath": "/var/lib/docker/containers/04b96b7c3b6ed959035eb33122f22669e1251218a5adb9a312aafebe9b483b24/04b96b7c3b6ed959035eb33122f22669e1251218a5adb9a312aafebe9b483b24-json.log",
	        "Name": "/scheduled-stop-20210814093051-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210814093051-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210814093051-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/004ad7c60a713adb6ad4a83789a065dc24f161eeb3f395cad9ec8580d862f8a5-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/004ad7c60a713adb6ad4a83789a065dc24f161eeb3f395cad9ec8580d862f8a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/004ad7c60a713adb6ad4a83789a065dc24f161eeb3f395cad9ec8580d862f8a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/004ad7c60a713adb6ad4a83789a065dc24f161eeb3f395cad9ec8580d862f8a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210814093051-6746",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210814093051-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210814093051-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210814093051-6746",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210814093051-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5eb25a6b9d81f66310c178bd62b58e26e526804531f60dcb0060752063cd4774",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/5eb25a6b9d81",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210814093051-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "04b96b7c3b6e"
	                    ],
	                    "NetworkID": "72e974925864b0c99c960fbe345b58f419e3894458206a15b542de6dc19f3007",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210814093051-6746 -n scheduled-stop-20210814093051-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210814093051-6746 -n scheduled-stop-20210814093051-6746: exit status 7 (88.04889ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210814093051-6746" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210814093051-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210814093051-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210814093051-6746: (5.355812061s)
--- FAIL: TestScheduledStopUnix (88.47s)
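
The failing sequence can be replayed by hand with the same commands recorded above; a minimal sketch, assuming a built out/minikube-linux-amd64 and an illustrative profile name (sched-stop-demo):

	# start a containerd cluster on the docker driver
	out/minikube-linux-amd64 start -p sched-stop-demo --memory=2048 --driver=docker --container-runtime=containerd
	# schedule a stop, then confirm the countdown was recorded
	out/minikube-linux-amd64 stop -p sched-stop-demo --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p sched-stop-demo
	# cancel the pending stop, then schedule a short one and wait for it to fire
	out/minikube-linux-amd64 stop -p sched-stop-demo --cancel-scheduled
	out/minikube-linux-amd64 stop -p sched-stop-demo --schedule 5s
	sleep 10   # wait past the scheduled stop
	out/minikube-linux-amd64 status -p sched-stop-demo

Once the 5s schedule fires, status is expected to report "Stopped" (exit status 7, as the post-mortem above eventually shows); this run instead hit exit status 3 with "host: Error" because status could not inspect the no-longer-running container to get its SSH port.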

TestRunningBinaryUpgrade (2039.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.294718574.exe start -p running-upgrade-20210814093236-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0814 09:32:38.024707    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:32:50.188531    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.294718574.exe start -p running-upgrade-20210814093236-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 109 (9m6.920740605s)
-- stdout --
	* [running-upgrade-20210814093236-6746] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	  - KUBECONFIG=/tmp/legacy_kubeconfig578709749
	* Using the docker driver based on user configuration
	* Starting control plane node running-upgrade-20210814093236-6746 in cluster running-upgrade-20210814093236-6746
	* Downloading Kubernetes v1.20.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00% 32.77 MiB
	X Unable to load cached images: loading cached images: containerd load /var/lib/minikube/images/kube-scheduler_v1.20.0: ctr images import: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1
	stdout:
	unpacking k8s.gcr.io/kube-scheduler:v1.20.0 (sha256:8830f9f9bb6d745852e23b430f8f073484d4eef5eaecb7b71ea9f56c407cca4a)...
	stderr:
	ctr: failed to prepare extraction snapshot "extract-271686063-RCyk sha256:fcca38158a3c980169363bd65e481197969a86ad0a01637ca87daa8f902b6dbb": failed to create snapshot: missing parent "k8s.io/16/sha256:e7ee84ae4d1363ccf59b14bf34a79c245705dfd55429918b63c754d84c85d904" bucket: not found
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost running-upgrade-20210814093236-6746] and IPs [192.168.70.91 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost running-upgrade-20210814093236-6746] and IPs [192.168.70.91 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING FileContent--
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
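
Note: the stderr above ends with minikube's own remediation hint (exit K8S_KUBELET_NOT_RUNNING on a host whose pre-flight verification reports CGROUPS_HUGETLB: missing). A minimal sketch of that suggested retry, reusing the exact binary and profile name from this run; whether the systemd cgroup driver is actually what this Debian 9.13 host needs is an assumption, not something the log confirms:

    # Inspect kubelet startup failures first, as the log itself suggests
    journalctl -xeu kubelet

    # Retry the start with the kubelet pinned to the systemd cgroup driver
    # (the flag minikube's error output recommends; binary/profile taken from this run)
    /tmp/minikube-v1.16.0.294718574.exe start -p running-upgrade-20210814093236-6746 \
      --memory=2200 --vm-driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd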
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.294718574.exe start -p running-upgrade-20210814093236-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0814 09:42:38.025336    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.294718574.exe start -p running-upgrade-20210814093236-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 109 (12m29.556143647s)

                                                
                                                
-- stdout --
	* [running-upgrade-20210814093236-6746] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	  - KUBECONFIG=/tmp/legacy_kubeconfig045864322
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-20210814093236-6746 in cluster running-upgrade-20210814093236-6746
	* Updating the running docker "running-upgrade-20210814093236-6746" container ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Unable to load cached images: loading cached images: containerd load /var/lib/minikube/images/kube-proxy_v1.20.0: ctr images import: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1
	stdout:
	unpacking k8s.gcr.io/kube-proxy:v1.20.0 (sha256:aba25eb6f292d83303d5428dec17fa0c82d3b651cd5038124dd086f5ddf8559d)...
	stderr:
	ctr: failed to prepare extraction snapshot "extract-407296225-Ngth sha256:a433e1037016329976cfe693182b08f8b4ef4f399eca92a1a5b1675015e17bf6": failed to create snapshot: missing parent "k8s.io/2/sha256:f00bc8568f7bbf2863db216b90193b921672a923d0295e59d3311a6c9d2b41c8" bucket: not found
	
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
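
Note: when the kubelet check times out like this, kubeadm's output already names the crictl triage path. Restated here as runnable commands; the sudo prefix and the CONTAINERID placeholder are assumptions layered on the logged text:

    # List all Kubernetes containers through containerd's CRI socket
    sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause

    # Inspect the failing container's logs (replace CONTAINERID with an ID from the listing above)
    sudo crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID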
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.294718574.exe start -p running-upgrade-20210814093236-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.294718574.exe start -p running-upgrade-20210814093236-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 109 (12m17.592786773s)

                                                
                                                
-- stdout --
	* [running-upgrade-20210814093236-6746] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	  - KUBECONFIG=/tmp/legacy_kubeconfig574762500
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-20210814093236-6746 in cluster running-upgrade-20210814093236-6746
	* Updating the running docker "running-upgrade-20210814093236-6746" container ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
[K| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW
/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW[
K- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\
WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| 
WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ W
W- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW
\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW
[K| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW
/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW[
K- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\
WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| 
WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ W
W- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW
\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW
[K| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Unable to load cached images: loading cached images: containerd load /var/lib/minikube/images/kube-apiserver_v1.20.0: ctr images import: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0: Process exited with status 1
	stdout:
	unpacking k8s.gcr.io/kube-apiserver:v1.20.0 (sha256:7af8b0bd8634d8c4e8faffb7a0d4718f0e3d14a32910611651972c9b6e68bbba)...
	stderr:
	ctr: failed to prepare extraction snapshot "extract-465106116-VyM- sha256:fcca38158a3c980169363bd65e481197969a86ad0a01637ca87daa8f902b6dbb": failed to create snapshot: missing parent "k8s.io/16/sha256:e7ee84ae4d1363ccf59b14bf34a79c245705dfd55429918b63c754d84c85d904" bucket: not found
	
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.16.0 start failed: exit status 109
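For triage, a minimal sketch assembled from the suggestions in the log above; the profile name is taken from this run, and wrapping the commands in `minikube ssh` (plus re-running start with the suggested flag) is an assumption, not a step the test performed:

	# inspect the kubelet journal inside the node container
	minikube ssh -p running-upgrade-20210814093236-6746 -- sudo journalctl -xeu kubelet
	# list control-plane containers via containerd's CRI endpoint
	minikube ssh -p running-upgrade-20210814093236-6746 -- sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a
	# retry with the cgroup driver the log suggests (minikube issue #4172)
	minikube start -p running-upgrade-20210814093236-6746 --extra-config=kubelet.cgroup-driver=systemd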
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-14 10:06:33.443870735 +0000 UTC m=+3715.391783705
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210814093236-6746
helpers_test.go:236: (dbg) docker inspect running-upgrade-20210814093236-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "921b7fcb3759712d978aa57501fad26a32e8f7b25aa315e2bc650ad0a9c15e25",
	        "Created": "2021-08-14T09:33:09.021464491Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 133165,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:33:09.607371241Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/921b7fcb3759712d978aa57501fad26a32e8f7b25aa315e2bc650ad0a9c15e25/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/921b7fcb3759712d978aa57501fad26a32e8f7b25aa315e2bc650ad0a9c15e25/hostname",
	        "HostsPath": "/var/lib/docker/containers/921b7fcb3759712d978aa57501fad26a32e8f7b25aa315e2bc650ad0a9c15e25/hosts",
	        "LogPath": "/var/lib/docker/containers/921b7fcb3759712d978aa57501fad26a32e8f7b25aa315e2bc650ad0a9c15e25/921b7fcb3759712d978aa57501fad26a32e8f7b25aa315e2bc650ad0a9c15e25-json.log",
	        "Name": "/running-upgrade-20210814093236-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20210814093236-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-20210814093236-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/73ea94cb924ec15b9ebec8af7b437ceeee7555efc94aa37a005abde5b387050b-init/diff:/var/lib/docker/overlay2/2cafb81b979c8880da6f5596b32970bb9719655502b9990b62750c618bdcc547/diff:/var/lib/docker/overlay2/0202b62097bc3ddbcd1e97441d3df8cfa0e087d8e5697c7b29c818a377c5524c/diff:/var/lib/docker/overlay2/b28a03234fd586f1acc29f2cfffd121bb0f6a658a9d86801afd469058bfd6e3f/diff:/var/lib/docker/overlay2/c8a621d733d3d29bc776084d08a42f0a6bf35ed6070a6687c5b774fb3e2e4b4c/diff:/var/lib/docker/overlay2/b046431968f9765e372628f2b0da5e27d188508fd7e25b91acb217c290eadc7c/diff:/var/lib/docker/overlay2/0d3083d996e9cbbaecfa5e1ee2ed1328301a030d777f2b50731e115480db3937/diff:/var/lib/docker/overlay2/cfecb5fe5376f9b71357b351b97a8a3acf4db861103cfc9a32249a6ac7ad65a2/diff:/var/lib/docker/overlay2/8a982d24057b6224410aee2c2bf69d7d3e5c80b886d3149bdc5b70fb58ba19a3/diff:/var/lib/docker/overlay2/19119623aee3e3d8548949d7f371508f188423a41c884afdd60783ea3d04dfd2/diff:/var/lib/docker/overlay2/961b0b
fc14d3bc5247a0633321e6ecb35184a8ca04fcb67137d1902b1819b713/diff:/var/lib/docker/overlay2/73d6fffe011f1165eb74933df0ac861a352d5ea4996693b9037d2169a22a1f66/diff:/var/lib/docker/overlay2/ef4c48aec0aaecc0c11e141419b7fecedc8536ab17883e581089dc0db3ca9e26/diff:/var/lib/docker/overlay2/d363cb3f46b497740023a23af335a9625b12d142b5f35e5530bf985d00622edb/diff:/var/lib/docker/overlay2/c4381af3706d60b7007813ae53dfcadb001ac0f70b8bb585ea18299721facd1d/diff:/var/lib/docker/overlay2/4e40b059d193b484168f48dee422fb383ee02819016429fd8447eea041fdd09e/diff:/var/lib/docker/overlay2/e0469e800081a521f89b4d7ef77f395a7ae43d1d0d6c4ff8d51054c96d43c80d/diff:/var/lib/docker/overlay2/d46faeddbc3e71208da0de07cc512604d57ca1fc613a8d2df31ec7e3ffa8bbcc/diff:/var/lib/docker/overlay2/ea32f200adc5f6550940fdcbb034b97208685b0b2ec47603dcff51314c15077b/diff:/var/lib/docker/overlay2/d03ddf12fae7ed09d9310ddbaf63040c51fdb87e24956e85f2c9193fcc72c734/diff:/var/lib/docker/overlay2/9d0e1797e28922126194a6017959ab9fdf0e463f42902eac15f758be7eb84bc0/diff:/var/lib/d
ocker/overlay2/96dcde54edda8d3bc4e47332312d8867426dac4c6cb4159fde74140ba0ce74ca/diff:/var/lib/docker/overlay2/2f6d702518c4d35e2faba54f007e173ed910b2e83666cb264b05a57bb5fcd25d/diff:/var/lib/docker/overlay2/469957e2fac1545e060d00b02f0317930aed4b734e6698f4c8667712fef79b38/diff:/var/lib/docker/overlay2/fbe625b759b982135c13ff05cddd3bd3a86593e14396d4c0bcddaba4ddde2cfd/diff:/var/lib/docker/overlay2/3ea66287d33c09b099f866307aec25187925e50da5c2d6d0d8ae0764e685ef76/diff:/var/lib/docker/overlay2/dca14b80409bf51f98b165460555f187e61252d7d9f901e1856c6d63583edda1/diff:/var/lib/docker/overlay2/605b36a3e74900cb2da8421d3ae76eb61a25ce762d60d54b194033e2288365ee/diff:/var/lib/docker/overlay2/1e8a81657e7689a5d86a791e9a265b99d2c4db0c2c33554965002cb9effc3087/diff:/var/lib/docker/overlay2/c624473413952a48a8cca6a78793a69d8f1098865b29c2ebc10975f346b975ea/diff:/var/lib/docker/overlay2/40576377926bff92326325dd7ca41f32c3b5ee9051f5f6fd95939a1fc0c2bc85/diff:/var/lib/docker/overlay2/08e3e2ff5443f67147ea762a797bbb139746c70cc53a8faf7986f5a19df
009cb/diff:/var/lib/docker/overlay2/c89ee044ab56f8f613a4b3944e0deaeb9bed3ef3a1cd12e131f5ac3afa87d8b7/diff:/var/lib/docker/overlay2/1b4140f71e09964438606dd9d6396c56408c8bcefe0954b534c7bc9b961542ef/diff:/var/lib/docker/overlay2/3252732b3d8ab3c5f4ae2600a2b4ddad1888231a7bef7871ef9b27da11e8861e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73ea94cb924ec15b9ebec8af7b437ceeee7555efc94aa37a005abde5b387050b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73ea94cb924ec15b9ebec8af7b437ceeee7555efc94aa37a005abde5b387050b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73ea94cb924ec15b9ebec8af7b437ceeee7555efc94aa37a005abde5b387050b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20210814093236-6746",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20210814093236-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20210814093236-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20210814093236-6746",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20210814093236-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a17129eee0bf2e13782e768c34ba89f0237ad95c3ff0f379dba2d7830bb58e3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1a17129eee0b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-20210814093236-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.91"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "921b7fcb3759"
	                    ],
	                    "NetworkID": "64e36b20584ce00ed653c19051d0e39a2fc0d30e7540e595984b350f02c739ae",
	                    "EndpointID": "3401c96a6d7849a7613b12914acd0fdf4d254a21eb3014d43022233dc1c5ec43",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.91",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:5b",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
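When only a few fields matter, `docker inspect` takes a Go template instead of dumping the full JSON; a sketch against the same container, using field paths visible in the output above:

	docker inspect -f '{{.State.Status}}' running-upgrade-20210814093236-6746
	docker inspect -f '{{json .NetworkSettings.Ports}}' running-upgrade-20210814093236-6746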
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210814093236-6746 -n running-upgrade-20210814093236-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210814093236-6746 -n running-upgrade-20210814093236-6746: exit status 6 (302.714669ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 10:06:33.773308  336730 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20210814093236-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig

                                                
                                                
** /stderr **
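The stale-context warning above has a direct fix; a sketch, assuming the profile still exists when it is run (the global -p flag selects the profile):

	minikube update-context -p running-upgrade-20210814093236-6746
	kubectl config current-context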
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "running-upgrade-20210814093236-6746" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "running-upgrade-20210814093236-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210814093236-6746

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210814093236-6746: (2.10394712s)
--- FAIL: TestRunningBinaryUpgrade (2039.76s)
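The repeated ctr failure above ("missing parent ... bucket: not found") suggests containerd's snapshot metadata and its content store were out of sync inside the node. Had the profile not been deleted, one hedged way to look at that state from inside the node (standard ctr subcommands; whether anything actionable shows up is an assumption):

	minikube ssh -p running-upgrade-20210814093236-6746 -- sudo ctr -n=k8s.io images ls -q
	minikube ssh -p running-upgrade-20210814093236-6746 -- sudo ctr -n=k8s.io snapshots ls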

                                                
                                    
x
+
TestStoppedBinaryUpgrade (2041.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.693998509.exe start -p stopped-upgrade-20210814093232-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.693998509.exe start -p stopped-upgrade-20210814093232-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (9m9.223663522s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210814093232-6746] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	  - KUBECONFIG=/tmp/legacy_kubeconfig965562010
	* minikube 1.22.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.22.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Using the docker driver based on user configuration
	* Starting control plane node stopped-upgrade-20210814093232-6746 in cluster stopped-upgrade-20210814093232-6746
	* Downloading Kubernetes v1.20.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Generating certificates and keys ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00% 28.49 MiB
	X Unable to load cached images: loading cached images: containerd load /var/lib/minikube/images/kube-scheduler_v1.20.0: ctr images import: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1
	stdout:
	unpacking k8s.gcr.io/kube-scheduler:v1.20.0 (sha256:8830f9f9bb6d745852e23b430f8f073484d4eef5eaecb7b71ea9f56c407cca4a)...
	stderr:
	ctr: failed to prepare extraction snapshot "extract-207920785-j9NS sha256:fcca38158a3c980169363bd65e481197969a86ad0a01637ca87daa8f902b6dbb": failed to create snapshot: missing parent "k8s.io/16/sha256:e7ee84ae4d1363ccf59b14bf34a79c245705dfd55429918b63c754d84c85d904" bucket: not found
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost stopped-upgrade-20210814093232-6746] and IPs [192.168.59.91 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost stopped-upgrade-20210814093232-6746] and IPs [192.168.59.91 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: missing optional cgroups: hugetlb
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: missing optional cgroups: hugetlb
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
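The `missing parent ... bucket: not found` line above is containerd reporting that the parent layer recorded for this snapshot is gone from its metadata store, so the cached image cannot be unpacked; kubeadm then times out waiting for the kubelet. A minimal sketch of how one might confirm this by hand follows: the profile name and image path are copied from the log, while the `minikube ssh` wrapping and the remove-then-reimport recovery step are assumptions, not something this report verifies.

	# Re-run the failing import inside the node container (assumption: profile still up):
	out/minikube-linux-amd64 ssh -p stopped-upgrade-20210814093232-6746 -- \
	  sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0

	# Plausible (unverified) recovery: drop the half-unpacked image so containerd
	# rebuilds the snapshot chain on the next import:
	out/minikube-linux-amd64 ssh -p stopped-upgrade-20210814093232-6746 -- \
	  sudo ctr -n=k8s.io images rm k8s.gcr.io/kube-scheduler:v1.20.0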
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.693998509.exe start -p stopped-upgrade-20210814093232-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.693998509.exe start -p stopped-upgrade-20210814093232-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (12m27.876472415s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210814093232-6746] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	  - KUBECONFIG=/tmp/legacy_kubeconfig930657519
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-20210814093232-6746 in cluster stopped-upgrade-20210814093232-6746
	* Updating the running docker "stopped-upgrade-20210814093232-6746" container ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Generating certificates and keys ...

                                                
                                                
-- /stdout --
** stderr ** 
	X Unable to load cached images: loading cached images: containerd load /var/lib/minikube/images/kube-proxy_v1.20.0: ctr images import: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1
	stdout:
	unpacking k8s.gcr.io/kube-proxy:v1.20.0 (sha256:aba25eb6f292d83303d5428dec17fa0c82d3b651cd5038124dd086f5ddf8559d)...
	stderr:
	ctr: failed to prepare extraction snapshot "extract-149196621-9x6E sha256:a433e1037016329976cfe693182b08f8b4ef4f399eca92a1a5b1675015e17bf6": failed to create snapshot: missing parent "k8s.io/2/sha256:f00bc8568f7bbf2863db216b90193b921672a923d0295e59d3311a6c9d2b41c8" bucket: not found
	
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: missing optional cgroups: hugetlb
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: missing optional cgroups: hugetlb
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.693998509.exe start -p stopped-upgrade-20210814093232-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.693998509.exe start -p stopped-upgrade-20210814093232-6746 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 80 (12m18.781207432s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210814093232-6746] minikube v1.16.0 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	  - KUBECONFIG=/tmp/legacy_kubeconfig115295481
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-20210814093232-6746 in cluster stopped-upgrade-20210814093232-6746
	* Updating the running docker "stopped-upgrade-20210814093232-6746" container ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Generating certificates and keys ...

                                                
                                                
-- /stdout --
** stderr ** 
	X Unable to load cached images: loading cached images: containerd load /var/lib/minikube/images/kube-controller-manager_v1.20.0: ctr images import: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.20.0: Process exited with status 1
	stdout:
	unpacking k8s.gcr.io/kube-controller-manager:v1.20.0 (sha256:a772f562b5bf05d0d160bfd2f3a6dc09496f6c8a44dc777002277546e138da07)...
	stderr:
	ctr: failed to prepare extraction snapshot "extract-815647394-aQ6g sha256:fcca38158a3c980169363bd65e481197969a86ad0a01637ca87daa8f902b6dbb": failed to create snapshot: missing parent "k8s.io/16/sha256:e7ee84ae4d1363ccf59b14bf34a79c245705dfd55429918b63c754d84c85d904" bucket: not found
	
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-16-amd64
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: missing
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: missing optional cgroups: hugetlb
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: missing optional cgroups: hugetlb
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
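When the kubelet-check timeout above fires, kubeadm's own hint is to list the control-plane containers with crictl. Those commands can be driven from the host without entering the node interactively; a sketch, where the crictl invocations are verbatim from the kubeadm output above and the ssh wrapping is an assumption:

	out/minikube-linux-amd64 ssh -p stopped-upgrade-20210814093232-6746 -- \
	  sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause
	# then, with an ID from that listing in place of CONTAINERID:
	out/minikube-linux-amd64 ssh -p stopped-upgrade-20210814093232-6746 -- \
	  sudo crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID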
version_upgrade_test.go:192: legacy v1.16.0 start failed: exit status 80
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-14 10:06:31.74025819 +0000 UTC m=+3713.688171145
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210814093232-6746
helpers_test.go:236: (dbg) docker inspect stopped-upgrade-20210814093232-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e45fddaaeb1595d806348ff524e279313ec02e95575c18fb6f9de778a2ffe0eb",
	        "Created": "2021-08-14T09:33:07.443883542Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 132193,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:33:08.034777925Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/e45fddaaeb1595d806348ff524e279313ec02e95575c18fb6f9de778a2ffe0eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e45fddaaeb1595d806348ff524e279313ec02e95575c18fb6f9de778a2ffe0eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/e45fddaaeb1595d806348ff524e279313ec02e95575c18fb6f9de778a2ffe0eb/hosts",
	        "LogPath": "/var/lib/docker/containers/e45fddaaeb1595d806348ff524e279313ec02e95575c18fb6f9de778a2ffe0eb/e45fddaaeb1595d806348ff524e279313ec02e95575c18fb6f9de778a2ffe0eb-json.log",
	        "Name": "/stopped-upgrade-20210814093232-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "stopped-upgrade-20210814093232-6746:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "stopped-upgrade-20210814093232-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8aa956c71068ed60966b38e22625f6aea7ef49ed68e30d98135292172526eef-init/diff:/var/lib/docker/overlay2/2cafb81b979c8880da6f5596b32970bb9719655502b9990b62750c618bdcc547/diff:/var/lib/docker/overlay2/0202b62097bc3ddbcd1e97441d3df8cfa0e087d8e5697c7b29c818a377c5524c/diff:/var/lib/docker/overlay2/b28a03234fd586f1acc29f2cfffd121bb0f6a658a9d86801afd469058bfd6e3f/diff:/var/lib/docker/overlay2/c8a621d733d3d29bc776084d08a42f0a6bf35ed6070a6687c5b774fb3e2e4b4c/diff:/var/lib/docker/overlay2/b046431968f9765e372628f2b0da5e27d188508fd7e25b91acb217c290eadc7c/diff:/var/lib/docker/overlay2/0d3083d996e9cbbaecfa5e1ee2ed1328301a030d777f2b50731e115480db3937/diff:/var/lib/docker/overlay2/cfecb5fe5376f9b71357b351b97a8a3acf4db861103cfc9a32249a6ac7ad65a2/diff:/var/lib/docker/overlay2/8a982d24057b6224410aee2c2bf69d7d3e5c80b886d3149bdc5b70fb58ba19a3/diff:/var/lib/docker/overlay2/19119623aee3e3d8548949d7f371508f188423a41c884afdd60783ea3d04dfd2/diff:/var/lib/docker/overlay2/961b0b
fc14d3bc5247a0633321e6ecb35184a8ca04fcb67137d1902b1819b713/diff:/var/lib/docker/overlay2/73d6fffe011f1165eb74933df0ac861a352d5ea4996693b9037d2169a22a1f66/diff:/var/lib/docker/overlay2/ef4c48aec0aaecc0c11e141419b7fecedc8536ab17883e581089dc0db3ca9e26/diff:/var/lib/docker/overlay2/d363cb3f46b497740023a23af335a9625b12d142b5f35e5530bf985d00622edb/diff:/var/lib/docker/overlay2/c4381af3706d60b7007813ae53dfcadb001ac0f70b8bb585ea18299721facd1d/diff:/var/lib/docker/overlay2/4e40b059d193b484168f48dee422fb383ee02819016429fd8447eea041fdd09e/diff:/var/lib/docker/overlay2/e0469e800081a521f89b4d7ef77f395a7ae43d1d0d6c4ff8d51054c96d43c80d/diff:/var/lib/docker/overlay2/d46faeddbc3e71208da0de07cc512604d57ca1fc613a8d2df31ec7e3ffa8bbcc/diff:/var/lib/docker/overlay2/ea32f200adc5f6550940fdcbb034b97208685b0b2ec47603dcff51314c15077b/diff:/var/lib/docker/overlay2/d03ddf12fae7ed09d9310ddbaf63040c51fdb87e24956e85f2c9193fcc72c734/diff:/var/lib/docker/overlay2/9d0e1797e28922126194a6017959ab9fdf0e463f42902eac15f758be7eb84bc0/diff:/var/lib/d
ocker/overlay2/96dcde54edda8d3bc4e47332312d8867426dac4c6cb4159fde74140ba0ce74ca/diff:/var/lib/docker/overlay2/2f6d702518c4d35e2faba54f007e173ed910b2e83666cb264b05a57bb5fcd25d/diff:/var/lib/docker/overlay2/469957e2fac1545e060d00b02f0317930aed4b734e6698f4c8667712fef79b38/diff:/var/lib/docker/overlay2/fbe625b759b982135c13ff05cddd3bd3a86593e14396d4c0bcddaba4ddde2cfd/diff:/var/lib/docker/overlay2/3ea66287d33c09b099f866307aec25187925e50da5c2d6d0d8ae0764e685ef76/diff:/var/lib/docker/overlay2/dca14b80409bf51f98b165460555f187e61252d7d9f901e1856c6d63583edda1/diff:/var/lib/docker/overlay2/605b36a3e74900cb2da8421d3ae76eb61a25ce762d60d54b194033e2288365ee/diff:/var/lib/docker/overlay2/1e8a81657e7689a5d86a791e9a265b99d2c4db0c2c33554965002cb9effc3087/diff:/var/lib/docker/overlay2/c624473413952a48a8cca6a78793a69d8f1098865b29c2ebc10975f346b975ea/diff:/var/lib/docker/overlay2/40576377926bff92326325dd7ca41f32c3b5ee9051f5f6fd95939a1fc0c2bc85/diff:/var/lib/docker/overlay2/08e3e2ff5443f67147ea762a797bbb139746c70cc53a8faf7986f5a19df
009cb/diff:/var/lib/docker/overlay2/c89ee044ab56f8f613a4b3944e0deaeb9bed3ef3a1cd12e131f5ac3afa87d8b7/diff:/var/lib/docker/overlay2/1b4140f71e09964438606dd9d6396c56408c8bcefe0954b534c7bc9b961542ef/diff:/var/lib/docker/overlay2/3252732b3d8ab3c5f4ae2600a2b4ddad1888231a7bef7871ef9b27da11e8861e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8aa956c71068ed60966b38e22625f6aea7ef49ed68e30d98135292172526eef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8aa956c71068ed60966b38e22625f6aea7ef49ed68e30d98135292172526eef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8aa956c71068ed60966b38e22625f6aea7ef49ed68e30d98135292172526eef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "stopped-upgrade-20210814093232-6746",
	                "Source": "/var/lib/docker/volumes/stopped-upgrade-20210814093232-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "stopped-upgrade-20210814093232-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "stopped-upgrade-20210814093232-6746",
	                "name.minikube.sigs.k8s.io": "stopped-upgrade-20210814093232-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1bdb39acf8a048590ef5706b97c0789837bf229213216e0dd461a47de3279bb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1bdb39acf8a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "stopped-upgrade-20210814093232-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.91"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e45fddaaeb15"
	                    ],
	                    "NetworkID": "ecdf6dc3485d4c171722ff81f4b65324ef53ddc17e6a06922623087094ae2578",
	                    "EndpointID": "6dfcb649567b08201d355b772d56517c2a7edd534ab61a802518b5afbffc3a36",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.91",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:5b",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
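Rather than eyeballing the full inspect dump, a Go template can pull out the one field the tooling needs here; this is the same template minikube itself runs later in this report (see the pause test log below), applied to the container above:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  stopped-upgrade-20210814093232-6746
	# -> 32881, matching the Ports block in the dump above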
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210814093232-6746 -n stopped-upgrade-20210814093232-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210814093232-6746 -n stopped-upgrade-20210814093232-6746: exit status 6 (303.536634ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 10:06:32.073429  336276 status.go:413] kubeconfig endpoint: extract IP: "stopped-upgrade-20210814093232-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig

                                                
                                                
** /stderr **
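The status error above is the stale-kubeconfig case that the stdout warning already names: the profile's endpoint is missing from the kubeconfig file. A sketch of the fix minikube itself suggests, assuming one wanted to repair rather than delete the profile:

	out/minikube-linux-amd64 update-context -p stopped-upgrade-20210814093232-6746
	# afterwards the profile's server address should appear in the kubeconfig
	# that the status check above reported it missing from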
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210814093232-6746" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "stopped-upgrade-20210814093232-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20210814093232-6746

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210814093232-6746: (2.166552456s)
--- FAIL: TestStoppedBinaryUpgrade (2041.74s)

                                                
                                    
x
+
TestPause/serial/Pause (116.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210814093545-6746 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210814093545-6746 --alsologtostderr -v=5: exit status 80 (1.88402961s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210814093545-6746 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 09:37:18.627768  164716 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:37:18.627860  164716 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:18.627872  164716 out.go:311] Setting ErrFile to fd 2...
	I0814 09:37:18.627876  164716 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:18.628048  164716 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:37:18.628261  164716 out.go:305] Setting JSON to false
	I0814 09:37:18.628287  164716 mustload.go:65] Loading cluster: pause-20210814093545-6746
	I0814 09:37:18.628598  164716 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:37:18.629016  164716 cli_runner.go:115] Run: docker container inspect pause-20210814093545-6746 --format={{.State.Status}}
	I0814 09:37:18.669718  164716 host.go:66] Checking if "pause-20210814093545-6746" exists ...
	I0814 09:37:18.670397  164716 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210814093545-6746 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0814 09:37:18.672454  164716 out.go:177] * Pausing node pause-20210814093545-6746 ... 
	I0814 09:37:18.672483  164716 host.go:66] Checking if "pause-20210814093545-6746" exists ...
	I0814 09:37:18.672738  164716 ssh_runner.go:149] Run: systemctl --version
	I0814 09:37:18.672775  164716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210814093545-6746
	I0814 09:37:18.711140  164716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/pause-20210814093545-6746/id_rsa Username:docker}
	I0814 09:37:18.808441  164716 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:37:18.817522  164716 pause.go:50] kubelet running: true
	I0814 09:37:18.817580  164716 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:37:18.937114  164716 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:37:18.937204  164716 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:37:19.019986  164716 cri.go:76] found id: "a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564"
	I0814 09:37:19.020018  164716 cri.go:76] found id: "ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431"
	I0814 09:37:19.020028  164716 cri.go:76] found id: "9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38"
	I0814 09:37:19.020034  164716 cri.go:76] found id: "66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97"
	I0814 09:37:19.020040  164716 cri.go:76] found id: "0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216"
	I0814 09:37:19.020046  164716 cri.go:76] found id: "8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed"
	I0814 09:37:19.020052  164716 cri.go:76] found id: "d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119"
	I0814 09:37:19.020057  164716 cri.go:76] found id: "ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086"
	I0814 09:37:19.020063  164716 cri.go:76] found id: ""
	I0814 09:37:19.020111  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:37:19.057811  164716 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","pid":2531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22/rootfs","created":"2021-08-14T09:37:18.1369696Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_80eca970-b4ab-4ac8-af20-f814411672fb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216","pid":1158,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d
0a6aac05dea8ef621aa6bb0216","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216/rootfs","created":"2021-08-14T09:36:14.761009913Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","pid":1016,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171/rootfs","created":"2021-08-14T09:36:14.461002641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff4
7a7f5fc45d171","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210814093545-6746_f11ebb5af93764eea1676b8a16cd11fe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","pid":1593,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7/rootfs","created":"2021-08-14T09:36:36.720976391Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zgc2h_2b76115f-19df-4554-87f1-b88734b7e601"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","pid":1642,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97/rootfs","created":"2021-08-14T09:36:36.901095693Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6/rootfs","created":"2021-08-14T09:36:14.461001587Z","annotations":{"io.kubernetes.cri.container-type":
"sandbox","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210814093545-6746_f38a3f341ed6042c55d7f17229a2a5a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","pid":1928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca/rootfs","created":"2021-08-14T09:36:54.052967447Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-7njgj_5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4"},"owner":"root"},{"o
ciVersion":"1.0.2-dev","id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","pid":1005,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3/rootfs","created":"2021-08-14T09:36:14.436983353Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210814093545-6746_def6f5caa1dfaea021514c05e476f85c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","pid":1157,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","rootfs":"/run/
containerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed/rootfs","created":"2021-08-14T09:36:14.760946456Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","pid":1778,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38/rootfs","created":"2021-08-14T09:36:37.404999146Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b
20"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564","pid":2562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564/rootfs","created":"2021-08-14T09:37:18.352994908Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d
92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086/rootfs","created":"2021-08-14T09:36:14.653021099Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","pid":1116,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119/rootfs","created":"2021-08-14T09:36:14.708999492Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd"},"owner":"root"},{"ociVersion":"1.0.2-dev","
id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","pid":1601,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20/rootfs","created":"2021-08-14T09:36:37.001965799Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-tbw9g_35667363-ef4b-4333-af82-ae0a5645f03c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","pid":1960,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io
/ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431/rootfs","created":"2021-08-14T09:36:54.260969809Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","pid":1012,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd/rootfs","created":"2021-08-14T09:36:14.460995721Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-2
0210814093545-6746_a45cdcfbe723180b68e8cf5ee8920aa4"},"owner":"root"}]
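The "runc list -f json" dump above (logged at cri.go:103) is a single JSON array, hard-wrapped by the report viewer, which is why container IDs appear split across lines. Each element carries the few fields the pause logic inspects: "id", "pid", "status", and the io.kubernetes.cri.* annotations that distinguish pause sandboxes from application containers. A minimal decoding sketch; the type and names here are illustrative, not minikube's own:

    // Decoding sketch for the "runc list -f json" output above.
    // JSON keys mirror the dump; runcContainer is an illustrative type.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type runcContainer struct {
    	ID          string            `json:"id"`
    	PID         int               `json:"pid"`
    	Status      string            `json:"status"` // "running" or "paused" in this log
    	Annotations map[string]string `json:"annotations"`
    }

    func main() {
    	raw := []byte(`[{"id":"abc","pid":1,"status":"running",` +
    		`"annotations":{"io.kubernetes.cri.container-type":"sandbox"}}]`)
    	var cs []runcContainer
    	if err := json.Unmarshal(raw, &cs); err != nil {
    		panic(err)
    	}
    	for _, c := range cs {
    		fmt.Println(c.ID, c.Status, c.Annotations["io.kubernetes.cri.container-type"])
    	}
    }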
	I0814 09:37:19.058027  164716 cri.go:113] list returned 16 containers
	I0814 09:37:19.058038  164716 cri.go:116] container: {ID:0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 Status:running}
	I0814 09:37:19.058062  164716 cri.go:118] skipping 0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 - not in ps
	I0814 09:37:19.058066  164716 cri.go:116] container: {ID:0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 Status:running}
	I0814 09:37:19.058071  164716 cri.go:116] container: {ID:60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 Status:running}
	I0814 09:37:19.058075  164716 cri.go:118] skipping 60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 - not in ps
	I0814 09:37:19.058079  164716 cri.go:116] container: {ID:63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 Status:running}
	I0814 09:37:19.058083  164716 cri.go:118] skipping 63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 - not in ps
	I0814 09:37:19.058087  164716 cri.go:116] container: {ID:66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97 Status:running}
	I0814 09:37:19.058092  164716 cri.go:116] container: {ID:74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 Status:running}
	I0814 09:37:19.058099  164716 cri.go:118] skipping 74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 - not in ps
	I0814 09:37:19.058102  164716 cri.go:116] container: {ID:79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca Status:running}
	I0814 09:37:19.058110  164716 cri.go:118] skipping 79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca - not in ps
	I0814 09:37:19.058113  164716 cri.go:116] container: {ID:7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 Status:running}
	I0814 09:37:19.058121  164716 cri.go:118] skipping 7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 - not in ps
	I0814 09:37:19.058124  164716 cri.go:116] container: {ID:8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed Status:running}
	I0814 09:37:19.058128  164716 cri.go:116] container: {ID:9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38 Status:running}
	I0814 09:37:19.058132  164716 cri.go:116] container: {ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564 Status:running}
	I0814 09:37:19.058136  164716 cri.go:116] container: {ID:ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086 Status:running}
	I0814 09:37:19.058143  164716 cri.go:116] container: {ID:d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119 Status:running}
	I0814 09:37:19.058150  164716 cri.go:116] container: {ID:e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 Status:running}
	I0814 09:37:19.058156  164716 cri.go:118] skipping e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 - not in ps
	I0814 09:37:19.058163  164716 cri.go:116] container: {ID:ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431 Status:running}
	I0814 09:37:19.058167  164716 cri.go:116] container: {ID:faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd Status:running}
	I0814 09:37:19.058175  164716 cri.go:118] skipping faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd - not in ps
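The cri.go:116/118 lines above cross-reference the two listings: runc reported 16 containers (8 application containers plus their 8 pause sandboxes), while the crictl query returned only the 8 application-container IDs, so every entry "not in ps" is skipped. In the later passes the same filter also drops anything already paused (the state = "paused", want "running" lines further down). A sketch of that filter, extending the decoding sketch above (same illustrative names):

    // Filtering sketch matching the cri.go:116/118/122 lines: keep only
    // containers that crictl listed ("in ps") and that are still running.
    func filterPausable(all []runcContainer, inPS map[string]bool) []string {
    	var keep []string
    	for _, c := range all {
    		if !inPS[c.ID] {
    			continue // "skipping <id> - not in ps" (the sandbox entries)
    		}
    		if c.Status != "running" {
    			continue // skipping {<id> paused}: state = "paused", want "running"
    		}
    		keep = append(keep, c.ID)
    	}
    	return keep
    }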
	I0814 09:37:19.058238  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216
	I0814 09:37:19.073009  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97
	I0814 09:37:19.085378  164716 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:37:19Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:37:19.361816  164716 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:37:19.371957  164716 pause.go:50] kubelet running: false
	I0814 09:37:19.371999  164716 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
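Before each pause pass, pause.go checks whether kubelet is active and then disables it outright (systemctl disable --now), so kubelet cannot restart containers mid-pause. Here kubelet is already inactive ("kubelet running: false"), presumably stopped by the first pass, and the disable is simply idempotent.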
	I0814 09:37:19.465225  164716 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:37:19.465317  164716 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:37:19.549814  164716 cri.go:76] found id: "a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564"
	I0814 09:37:19.549843  164716 cri.go:76] found id: "ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431"
	I0814 09:37:19.549851  164716 cri.go:76] found id: "9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38"
	I0814 09:37:19.549857  164716 cri.go:76] found id: "66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97"
	I0814 09:37:19.549862  164716 cri.go:76] found id: "0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216"
	I0814 09:37:19.549868  164716 cri.go:76] found id: "8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed"
	I0814 09:37:19.549874  164716 cri.go:76] found id: "d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119"
	I0814 09:37:19.549880  164716 cri.go:76] found id: "ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086"
	I0814 09:37:19.549886  164716 cri.go:76] found id: ""
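The found id: lines come from splitting the combined crictl output on newlines: the single sudo -s eval runs one crictl ps -a --quiet query per namespace and concatenates the ID lists, and the empty found id: "" at the end is consistent with a plain split over the trailing newline. A self-contained sketch of that parsing (the sample output string is illustrative):

    // How newline-separated crictl output turns into the "found id:" lines,
    // including the empty final entry produced by the trailing newline.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	out := "id-one\nid-two\n" // concatenated `crictl ps -a --quiet` output
    	for _, id := range strings.Split(out, "\n") {
    		fmt.Printf("found id: %q\n", id) // last iteration prints ""
    	}
    }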
	I0814 09:37:19.549928  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:37:19.589438  164716 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","pid":2531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22/rootfs","created":"2021-08-14T09:37:18.1369696Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_80eca970-b4ab-4ac8-af20-f814411672fb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216","pid":1158,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d0
a6aac05dea8ef621aa6bb0216","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216/rootfs","created":"2021-08-14T09:36:14.761009913Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","pid":1016,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171/rootfs","created":"2021-08-14T09:36:14.461002641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47
a7f5fc45d171","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210814093545-6746_f11ebb5af93764eea1676b8a16cd11fe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","pid":1593,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7/rootfs","created":"2021-08-14T09:36:36.720976391Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zgc2h_2b76115f-19df-4554-87f1-b88734b7e601"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","pid":1642,"status":"runni
ng","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97/rootfs","created":"2021-08-14T09:36:36.901095693Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6/rootfs","created":"2021-08-14T09:36:14.461001587Z","annotations":{"io.kubernetes.cri.container-type":"
sandbox","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210814093545-6746_f38a3f341ed6042c55d7f17229a2a5a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","pid":1928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca/rootfs","created":"2021-08-14T09:36:54.052967447Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-7njgj_5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4"},"owner":"root"},{"oc
iVersion":"1.0.2-dev","id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","pid":1005,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3/rootfs","created":"2021-08-14T09:36:14.436983353Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210814093545-6746_def6f5caa1dfaea021514c05e476f85c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","pid":1157,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","rootfs":"/run/c
ontainerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed/rootfs","created":"2021-08-14T09:36:14.760946456Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","pid":1778,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38/rootfs","created":"2021-08-14T09:36:37.404999146Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b2
0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564","pid":2562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564/rootfs","created":"2021-08-14T09:37:18.352994908Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d9
2f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086/rootfs","created":"2021-08-14T09:36:14.653021099Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","pid":1116,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119/rootfs","created":"2021-08-14T09:36:14.708999492Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd"},"owner":"root"},{"ociVersion":"1.0.2-dev","i
d":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","pid":1601,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20/rootfs","created":"2021-08-14T09:36:37.001965799Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-tbw9g_35667363-ef4b-4333-af82-ae0a5645f03c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","pid":1960,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431/rootfs","created":"2021-08-14T09:36:54.260969809Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","pid":1012,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd/rootfs","created":"2021-08-14T09:36:14.460995721Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20
210814093545-6746_a45cdcfbe723180b68e8cf5ee8920aa4"},"owner":"root"}]
	I0814 09:37:19.589593  164716 cri.go:113] list returned 16 containers
	I0814 09:37:19.589604  164716 cri.go:116] container: {ID:0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 Status:running}
	I0814 09:37:19.589613  164716 cri.go:118] skipping 0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 - not in ps
	I0814 09:37:19.589617  164716 cri.go:116] container: {ID:0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 Status:paused}
	I0814 09:37:19.589624  164716 cri.go:122] skipping {0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 paused}: state = "paused", want "running"
	I0814 09:37:19.589637  164716 cri.go:116] container: {ID:60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 Status:running}
	I0814 09:37:19.589646  164716 cri.go:118] skipping 60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 - not in ps
	I0814 09:37:19.589650  164716 cri.go:116] container: {ID:63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 Status:running}
	I0814 09:37:19.589654  164716 cri.go:118] skipping 63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 - not in ps
	I0814 09:37:19.589663  164716 cri.go:116] container: {ID:66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97 Status:running}
	I0814 09:37:19.589669  164716 cri.go:116] container: {ID:74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 Status:running}
	I0814 09:37:19.589674  164716 cri.go:118] skipping 74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 - not in ps
	I0814 09:37:19.589680  164716 cri.go:116] container: {ID:79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca Status:running}
	I0814 09:37:19.589685  164716 cri.go:118] skipping 79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca - not in ps
	I0814 09:37:19.589693  164716 cri.go:116] container: {ID:7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 Status:running}
	I0814 09:37:19.589697  164716 cri.go:118] skipping 7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 - not in ps
	I0814 09:37:19.589701  164716 cri.go:116] container: {ID:8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed Status:running}
	I0814 09:37:19.589705  164716 cri.go:116] container: {ID:9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38 Status:running}
	I0814 09:37:19.589709  164716 cri.go:116] container: {ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564 Status:running}
	I0814 09:37:19.589715  164716 cri.go:116] container: {ID:ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086 Status:running}
	I0814 09:37:19.589719  164716 cri.go:116] container: {ID:d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119 Status:running}
	I0814 09:37:19.589726  164716 cri.go:116] container: {ID:e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 Status:running}
	I0814 09:37:19.589730  164716 cri.go:118] skipping e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 - not in ps
	I0814 09:37:19.589740  164716 cri.go:116] container: {ID:ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431 Status:running}
	I0814 09:37:19.589746  164716 cri.go:116] container: {ID:faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd Status:running}
	I0814 09:37:19.589750  164716 cri.go:118] skipping faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd - not in ps
	I0814 09:37:19.589787  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97
	I0814 09:37:19.606018  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97 8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed
	I0814 09:37:19.622565  164716 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97 8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:37:19Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:37:20.163262  164716 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:37:20.174077  164716 pause.go:50] kubelet running: false
	I0814 09:37:20.174127  164716 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:37:20.284047  164716 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:37:20.284131  164716 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:37:20.372208  164716 cri.go:76] found id: "a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564"
	I0814 09:37:20.372229  164716 cri.go:76] found id: "ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431"
	I0814 09:37:20.372234  164716 cri.go:76] found id: "9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38"
	I0814 09:37:20.372238  164716 cri.go:76] found id: "66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97"
	I0814 09:37:20.372242  164716 cri.go:76] found id: "0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216"
	I0814 09:37:20.372255  164716 cri.go:76] found id: "8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed"
	I0814 09:37:20.372258  164716 cri.go:76] found id: "d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119"
	I0814 09:37:20.372262  164716 cri.go:76] found id: "ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086"
	I0814 09:37:20.372265  164716 cri.go:76] found id: ""
	I0814 09:37:20.372308  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:37:20.410410  164716 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","pid":2531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22/rootfs","created":"2021-08-14T09:37:18.1369696Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_80eca970-b4ab-4ac8-af20-f814411672fb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216","pid":1158,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d0
a6aac05dea8ef621aa6bb0216","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216/rootfs","created":"2021-08-14T09:36:14.761009913Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","pid":1016,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171/rootfs","created":"2021-08-14T09:36:14.461002641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47
a7f5fc45d171","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210814093545-6746_f11ebb5af93764eea1676b8a16cd11fe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","pid":1593,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7/rootfs","created":"2021-08-14T09:36:36.720976391Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zgc2h_2b76115f-19df-4554-87f1-b88734b7e601"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","pid":1642,"status":"pause
d","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97/rootfs","created":"2021-08-14T09:36:36.901095693Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6/rootfs","created":"2021-08-14T09:36:14.461001587Z","annotations":{"io.kubernetes.cri.container-type":"s
andbox","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210814093545-6746_f38a3f341ed6042c55d7f17229a2a5a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","pid":1928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca/rootfs","created":"2021-08-14T09:36:54.052967447Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-7njgj_5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4"},"owner":"root"},{"oci
Version":"1.0.2-dev","id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","pid":1005,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3/rootfs","created":"2021-08-14T09:36:14.436983353Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210814093545-6746_def6f5caa1dfaea021514c05e476f85c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","pid":1157,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","rootfs":"/run/co
ntainerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed/rootfs","created":"2021-08-14T09:36:14.760946456Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","pid":1778,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38/rootfs","created":"2021-08-14T09:36:37.404999146Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20
"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564","pid":2562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564/rootfs","created":"2021-08-14T09:37:18.352994908Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d92
f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086/rootfs","created":"2021-08-14T09:36:14.653021099Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","pid":1116,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119/rootfs","created":"2021-08-14T09:36:14.708999492Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id
":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","pid":1601,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20/rootfs","created":"2021-08-14T09:36:37.001965799Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-tbw9g_35667363-ef4b-4333-af82-ae0a5645f03c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","pid":1960,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e
f9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431/rootfs","created":"2021-08-14T09:36:54.260969809Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","pid":1012,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd/rootfs","created":"2021-08-14T09:36:14.460995721Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-202
10814093545-6746_a45cdcfbe723180b68e8cf5ee8920aa4"},"owner":"root"}]
	I0814 09:37:20.410632  164716 cri.go:113] list returned 16 containers
	I0814 09:37:20.410653  164716 cri.go:116] container: {ID:0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 Status:running}
	I0814 09:37:20.410666  164716 cri.go:118] skipping 0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 - not in ps
	I0814 09:37:20.410673  164716 cri.go:116] container: {ID:0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 Status:paused}
	I0814 09:37:20.410680  164716 cri.go:122] skipping {0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 paused}: state = "paused", want "running"
	I0814 09:37:20.410696  164716 cri.go:116] container: {ID:60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 Status:running}
	I0814 09:37:20.410702  164716 cri.go:118] skipping 60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 - not in ps
	I0814 09:37:20.410707  164716 cri.go:116] container: {ID:63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 Status:running}
	I0814 09:37:20.410713  164716 cri.go:118] skipping 63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 - not in ps
	I0814 09:37:20.410719  164716 cri.go:116] container: {ID:66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97 Status:paused}
	I0814 09:37:20.410730  164716 cri.go:122] skipping {66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97 paused}: state = "paused", want "running"
	I0814 09:37:20.410740  164716 cri.go:116] container: {ID:74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 Status:running}
	I0814 09:37:20.410747  164716 cri.go:118] skipping 74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 - not in ps
	I0814 09:37:20.410755  164716 cri.go:116] container: {ID:79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca Status:running}
	I0814 09:37:20.410762  164716 cri.go:118] skipping 79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca - not in ps
	I0814 09:37:20.410767  164716 cri.go:116] container: {ID:7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 Status:running}
	I0814 09:37:20.410774  164716 cri.go:118] skipping 7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 - not in ps
	I0814 09:37:20.410779  164716 cri.go:116] container: {ID:8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed Status:running}
	I0814 09:37:20.410790  164716 cri.go:116] container: {ID:9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38 Status:running}
	I0814 09:37:20.410801  164716 cri.go:116] container: {ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564 Status:running}
	I0814 09:37:20.410808  164716 cri.go:116] container: {ID:ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086 Status:running}
	I0814 09:37:20.410817  164716 cri.go:116] container: {ID:d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119 Status:running}
	I0814 09:37:20.410823  164716 cri.go:116] container: {ID:e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 Status:running}
	I0814 09:37:20.410833  164716 cri.go:118] skipping e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 - not in ps
	I0814 09:37:20.410838  164716 cri.go:116] container: {ID:ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431 Status:running}
	I0814 09:37:20.410847  164716 cri.go:116] container: {ID:faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd Status:running}
	I0814 09:37:20.410853  164716 cri.go:118] skipping faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd - not in ps
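Read together, the three passes show the pause making progress by exactly one container per retry cycle: the single-ID invocation that opens each cycle succeeds (kube-controller-manager after the first, kube-proxy after the second), and only the batched follow-up fails. The cycle below pauses etcd the same way before the batched call aborts the command for good, which is consistent with the post-mortem at the end, where the apiserver /healthz check reports etcd failing.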
	I0814 09:37:20.410894  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed
	I0814 09:37:20.429893  164716 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed 9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38
	I0814 09:37:20.453583  164716 out.go:177] 
	W0814 09:37:20.453747  164716 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed 9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:37:20Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed 9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:37:20Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0814 09:37:20.453775  164716 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0814 09:37:20.456039  164716 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0814 09:37:20.457425  164716 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210814093545-6746 --alsologtostderr -v=5" : exit status 80
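Exit status 80 appears to be minikube's guest-layer error code (ExGuestError in the reason package), to which GUEST_PAUSE failures map in this build. Note also that the final error text, the "* " separator, and the GitHub-issue box are each printed twice in the stderr above; that double emission comes from the out.go warning path and is cosmetic, separate from the runc pause argument bug that actually failed the test.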
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210814093545-6746
helpers_test.go:236: (dbg) docker inspect pause-20210814093545-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c",
	        "Created": "2021-08-14T09:35:47.328510764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:35:47.788540698Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hostname",
	        "HostsPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hosts",
	        "LogPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c-json.log",
	        "Name": "/pause-20210814093545-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210814093545-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210814093545-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210814093545-6746",
	                "Source": "/var/lib/docker/volumes/pause-20210814093545-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210814093545-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "name.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ccc2af153ef9d917059bc8c4f07b140ac515f4a831ba1bf6c90b0246a3c1997",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1ccc2af153ef",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210814093545-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "348c3cd444d9"
	                    ],
	                    "NetworkID": "d1c345d3493c76f3a399eb72a44a3805f583371e015cb9c75f513d1b9430742c",
	                    "EndpointID": "7f43385a3cfb69f0364951734129c7173a9f54c3b30297f57443926db80f5d72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
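The Ports block in the inspect output above is where the harness gets its SSH endpoint: 22/tcp inside the container is published on 127.0.0.1:32898. A minimal Go sketch of that lookup, shelling out to docker inspect with a standard Go template the way minikube's cli_runner does; the helper name sshHostPort is hypothetical, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host-side port bound to the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", "--format",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			// for a stopped container the Ports map is empty and the template
			// errors out; minikube surfaces this class of failure as the
			// "unable to inspect a not running container" error in this report
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("pause-20210814093545-6746")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(port) // "32898", given the Ports block above
	}
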
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746

=== CONT  TestPause/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746: exit status 2 (17.35418604s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0814 09:37:37.863845  165262 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
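The 500 above is the apiserver's aggregated health endpoint doing its job: every poststarthook reports ok, but the single [-]etcd failure fails /healthz as a whole. A standalone probe of that endpoint, sketched in Go under the assumption that skipping TLS verification is acceptable for a local repro (minikube's real status client authenticates with the cluster certificates instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// ?verbose asks the apiserver to list each check, matching the
		// [+]/[-] lines in the stderr block above
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status) // "500 Internal Server Error" while etcd is failing
		fmt.Print(string(body))
	}
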
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25
E0814 09:37:38.025572    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:37:50.189682    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25: exit status 110 (20.886414306s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | test-preload-20210814092837-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:30:06 UTC | Sat, 14 Aug 2021 09:30:47 UTC |
	|         | test-preload-20210814092837-6746         |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker         |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210814092837-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:30:47 UTC | Sat, 14 Aug 2021 09:30:48 UTC |
	|         | test-preload-20210814092837-6746         |                                          |         |         |                               |                               |
	|         | -- sudo crictl image ls                  |                                          |         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210814092837-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:30:48 UTC | Sat, 14 Aug 2021 09:30:51 UTC |
	|         | test-preload-20210814092837-6746         |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:30:51 UTC | Sat, 14 Aug 2021 09:31:33 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:31:34 UTC | Sat, 14 Aug 2021 09:31:34 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:31:47 UTC | Sat, 14 Aug 2021 09:32:12 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:14 UTC | Sat, 14 Aug 2021 09:32:19 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210814093219-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:26 UTC | Sat, 14 Aug 2021 09:32:32 UTC |
	|         | insufficient-storage-20210814093219-6746 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:33:38 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:38 UTC | Sat, 14 Aug 2021 09:33:59 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:34:08 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:34:08 UTC | Sat, 14 Aug 2021 09:34:11 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:59 UTC | Sat, 14 Aug 2021 09:35:00 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:01 UTC | Sat, 14 Aug 2021 09:35:42 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:35:45 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:36:31 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:31 UTC | Sat, 14 Aug 2021 09:36:34 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20210814093634-6746           | kubenet-20210814093634-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:34 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p flannel-20210814093635-6746           | flannel-20210814093635-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p false-20210814093635-6746             | false-20210814093635-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:36 UTC |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:45 UTC | Sat, 14 Aug 2021 09:36:56 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:56 UTC | Sat, 14 Aug 2021 09:37:18 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:36 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| -p      | force-systemd-flag-20210814093636-6746   | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:28 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:37:28
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:37:28.503077  166020 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:37:28.503146  166020 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:28.503150  166020 out.go:311] Setting ErrFile to fd 2...
	I0814 09:37:28.503153  166020 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:28.503241  166020 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:37:28.503465  166020 out.go:305] Setting JSON to false
	I0814 09:37:28.539408  166020 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4811,"bootTime":1628929038,"procs":252,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:37:28.539505  166020 start.go:121] virtualization: kvm guest
	I0814 09:37:28.541780  166020 out.go:177] * [force-systemd-env-20210814093728-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:37:28.543267  166020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:37:28.541935  166020 notify.go:169] Checking for updates...
	I0814 09:37:28.544650  166020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:37:28.546024  166020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:37:28.547313  166020 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:37:28.548642  166020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0814 09:37:28.549125  166020 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:37:28.549225  166020 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:37:28.549311  166020 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:37:28.549352  166020 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:37:28.596127  166020 docker.go:132] docker version: linux-19.03.15
	I0814 09:37:28.596207  166020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:37:28.675848  166020 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:37:28.6325853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:37:28.675957  166020 docker.go:244] overlay module found
	I0814 09:37:28.677814  166020 out.go:177] * Using the docker driver based on user configuration
	I0814 09:37:28.677846  166020 start.go:278] selected driver: docker
	I0814 09:37:28.677851  166020 start.go:751] validating driver "docker" against <nil>
	I0814 09:37:28.677868  166020 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:37:28.677918  166020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:37:28.677934  166020 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:37:28.679360  166020 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:37:28.680279  166020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:37:28.758365  166020 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:37:28.715860711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:37:28.758453  166020 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:37:28.758615  166020 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:37:28.758635  166020 cni.go:93] Creating CNI manager for ""
	I0814 09:37:28.758640  166020 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:37:28.758646  166020 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:37:28.758653  166020 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:37:28.758658  166020 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:37:28.758668  166020 start_flags.go:277] config:
	{Name:force-systemd-env-20210814093728-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210814093728-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:37:28.760562  166020 out.go:177] * Starting control plane node force-systemd-env-20210814093728-6746 in cluster force-systemd-env-20210814093728-6746
	I0814 09:37:28.760599  166020 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:37:28.761949  166020 out.go:177] * Pulling base image ...
	I0814 09:37:28.761983  166020 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:37:28.762008  166020 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:37:28.762024  166020 cache.go:56] Caching tarball of preloaded images
	I0814 09:37:28.762082  166020 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:37:28.762179  166020 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:37:28.762201  166020 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:37:28.762290  166020 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/config.json ...
	I0814 09:37:28.762312  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/config.json: {Name:mk541068afa495451d2e49abe676ce80b9e24c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:28.834526  166020 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:37:28.834550  166020 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:37:28.834566  166020 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:37:28.834597  166020 start.go:313] acquiring machines lock for force-systemd-env-20210814093728-6746: {Name:mk8a94a7f824f4038042c1dab6f774e3ec47710b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:37:28.834720  166020 start.go:317] acquired machines lock for "force-systemd-env-20210814093728-6746" in 96.103µs
	I0814 09:37:28.834746  166020 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20210814093728-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210814093728-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:37:28.834807  166020 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:37:28.836850  166020 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0814 09:37:28.837102  166020 start.go:160] libmachine.API.Create for "force-systemd-env-20210814093728-6746" (driver="docker")
	I0814 09:37:28.837137  166020 client.go:168] LocalClient.Create starting
	I0814 09:37:28.837225  166020 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:37:28.837297  166020 main.go:130] libmachine: Decoding PEM data...
	I0814 09:37:28.837316  166020 main.go:130] libmachine: Parsing certificate...
	I0814 09:37:28.837435  166020 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:37:28.837456  166020 main.go:130] libmachine: Decoding PEM data...
	I0814 09:37:28.837470  166020 main.go:130] libmachine: Parsing certificate...
	I0814 09:37:28.837857  166020 cli_runner.go:115] Run: docker network inspect force-systemd-env-20210814093728-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:37:28.874520  166020 cli_runner.go:162] docker network inspect force-systemd-env-20210814093728-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:37:28.874611  166020 network_create.go:255] running [docker network inspect force-systemd-env-20210814093728-6746] to gather additional debugging logs...
	I0814 09:37:28.874630  166020 cli_runner.go:115] Run: docker network inspect force-systemd-env-20210814093728-6746
	W0814 09:37:28.910301  166020 cli_runner.go:162] docker network inspect force-systemd-env-20210814093728-6746 returned with exit code 1
	I0814 09:37:28.910330  166020 network_create.go:258] error running [docker network inspect force-systemd-env-20210814093728-6746]: docker network inspect force-systemd-env-20210814093728-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20210814093728-6746
	I0814 09:37:28.910344  166020 network_create.go:260] output of [docker network inspect force-systemd-env-20210814093728-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20210814093728-6746
	
	** /stderr **
	I0814 09:37:28.910402  166020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:37:28.947376  166020 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-d1c345d3493c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:f5:52:80}}
	I0814 09:37:28.948494  166020 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000187330] misses:0}
	I0814 09:37:28.948536  166020 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:37:28.948558  166020 network_create.go:106] attempt to create docker network force-systemd-env-20210814093728-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:37:28.948611  166020 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20210814093728-6746
	I0814 09:37:29.017679  166020 network_create.go:90] docker network force-systemd-env-20210814093728-6746 192.168.58.0/24 created
	I0814 09:37:29.017706  166020 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20210814093728-6746" container
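The network and static-IP lines above are plain subnet arithmetic: 192.168.49.0/24 is already held by the pause profile's network, so the next candidate 192.168.58.0/24 is reserved, the gateway takes .1, and the node gets the first client address .2. The same derivation as a standalone Go sketch; this is illustrative, not minikube's actual network package:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, subnet, err := net.ParseCIDR("192.168.58.0/24")
		if err != nil {
			panic(err)
		}
		base := subnet.IP.To4()
		gateway := net.IPv4(base[0], base[1], base[2], 1)   // 192.168.58.1, matching the log
		clientMin := net.IPv4(base[0], base[1], base[2], 2) // 192.168.58.2, the kic node's static IP
		fmt.Println(gateway, clientMin)
	}
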
	I0814 09:37:29.017764  166020 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:37:29.057857  166020 cli_runner.go:115] Run: docker volume create force-systemd-env-20210814093728-6746 --label name.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:37:29.097841  166020 oci.go:102] Successfully created a docker volume force-systemd-env-20210814093728-6746
	I0814 09:37:29.097912  166020 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20210814093728-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --entrypoint /usr/bin/test -v force-systemd-env-20210814093728-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:37:29.922536  166020 oci.go:106] Successfully prepared a docker volume force-systemd-env-20210814093728-6746
	W0814 09:37:29.922599  166020 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:37:29.922613  166020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:37:29.922670  166020 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:37:29.922682  166020 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:37:29.922700  166020 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:37:29.922769  166020 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20210814093728-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:37:30.006070  166020 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20210814093728-6746 --name force-systemd-env-20210814093728-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --network force-systemd-env-20210814093728-6746 --ip 192.168.58.2 --volume force-systemd-env-20210814093728-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:37:30.496619  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Running}}
	I0814 09:37:30.540938  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:37:30.586114  166020 cli_runner.go:115] Run: docker exec force-systemd-env-20210814093728-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:37:30.719779  166020 oci.go:278] the created container "force-systemd-env-20210814093728-6746" has a running status.
	I0814 09:37:30.719819  166020 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa...
	I0814 09:37:31.084082  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0814 09:37:31.084123  166020 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:37:31.503370  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:37:31.542574  166020 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:37:31.542599  166020 kic_runner.go:115] Args: [docker exec --privileged force-systemd-env-20210814093728-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
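The kic steps above generate an RSA keypair on the host and copy the public half into the container as /home/docker/.ssh/authorized_keys (381 bytes, per the log). A minimal sketch of that provisioning, assuming a plain 2048-bit key and using golang.org/x/crypto/ssh for the authorized_keys encoding; the details of minikube's real kic code may differ:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// the private half stays on the host under .minikube/machines/<name>/id_rsa
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		// the public half is what gets copied into the container's authorized_keys
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d bytes of private PEM, authorized_keys line: %s",
			len(privPEM), ssh.MarshalAuthorizedKey(pub))
	}
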
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a747d02c26253       6e38f40d628db       20 seconds ago       Exited              storage-provisioner       0                   0eed254b3316c
	ef9cd508c4bcf       296a6d5035e2d       44 seconds ago       Running             coredns                   0                   79704c1ba1377
	9753722af7745       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   e9f1ed022aae0
	66b515b3e4a14       adb2816ea823a       About a minute ago   Running             kube-proxy                0                   63ba7b0ef4459
	0fcd2105780a3       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   0                   74d460f2e7a7f
	8bcc07d573eb1       0369cf4303ffd       About a minute ago   Running             etcd                      0                   60a80199b4a57
	d3bf648d26067       6be0dc1302e30       About a minute ago   Running             kube-scheduler            0                   faadff72e3a9c
	ab29adb23277d       3d174f00aa39e       About a minute ago   Running             kube-apiserver            0                   7b9c957209d40
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:37:38 UTC. --
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.541959125Z" level=info msg="Connect containerd service"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542013371Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542632511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542708189Z" level=info msg="Start subscribing containerd event"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542770137Z" level=info msg="Start recovering state"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542857642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542918689Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542967809Z" level=info msg="containerd successfully booted in 0.040983s"
	Aug 14 09:36:58 pause-20210814093545-6746 systemd[1]: Started containerd container runtime.
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625754612Z" level=info msg="Start event monitor"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625793205Z" level=info msg="Start snapshots syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625802136Z" level=info msg="Start cni network conf syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625807599Z" level=info msg="Start streaming server"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.008705018Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.026044573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 pid=2510
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.163827167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.166278582Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.228933904Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.229330725Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.371077991Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\" returns successfully"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449187715Z" level=info msg="Finish piping stderr of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449222888Z" level=info msg="Finish piping stdout of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.450533507Z" level=info msg="TaskExit event &TaskExit{ContainerID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,Pid:2562,ExitStatus:255,ExitedAt:2021-08-14 09:37:32.450264852 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501408095Z" level=info msg="shim disconnected" id=a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501502681Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe d9 f0 5e 28 3d 08 06        .........^(=..
	[ +16.481725] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:26] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:28] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:29] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth38d0eb85
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a bd 7c 39 49 62 08 06        ........|9Ib..
	[Aug14 09:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:32] cgroup: cgroup2: unknown option "nsdelegate"
	[ +13.411048] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.035402] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:33] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.451942] cgroup: cgroup2: unknown option "nsdelegate"
	[ +14.641136] tee (136175): /proc/134359/oom_adj is deprecated, please use /proc/134359/oom_score_adj instead.
	[Aug14 09:34] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.573195] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe29e5784
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 4c 1a e2 69 4b 08 06        .......L..iK..
	[  +8.954711] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:35] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth529d8992
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 22 4f ef 2e 27 f0 08 06        ......"O..'...
	[  +9.430011] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:36] cgroup: cgroup2: unknown option "nsdelegate"
	[ +36.823390] cgroup: cgroup2: unknown option "nsdelegate"
	[ +15.237179] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth43e4fc69
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 7b 35 3d 7d 88 08 06        .......{5=}...
	[Aug14 09:37] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed] <==
	* 2021-08-14 09:36:15.337412 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-14 09:36:15.337510 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-14 09:36:15.338236 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:36:15.338273 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-14 09:36:29.373341 W | etcdserver: request "header:<ID:8128006959566550151 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:70cc7b4404ddf486>" with result "size:42" took too long (722.758009ms) to execute
	2021-08-14 09:36:29.375095 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210814093545-6746\" " with result "range_response_count:1 size:3970" took too long (1.316088805s) to execute
	2021-08-14 09:36:35.339357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:36:40.102148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:36:50.101987 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:00.102036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:10.102324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:10.796456 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (540.033169ms) to execute
	2021-08-14 09:37:10.796484 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (227.671703ms) to execute
	2021-08-14 09:37:10.796572 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (157.24978ms) to execute
	2021-08-14 09:37:13.568821 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000092497s) to execute
	2021-08-14 09:37:13.954357 W | wal: sync duration of 3.152713958s, expected less than 1s
	2021-08-14 09:37:13.955117 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:37466" took too long (3.15117799s) to execute
	2021-08-14 09:37:15.221449 W | wal: sync duration of 1.25762747s, expected less than 1s
	2021-08-14 09:37:16.767289 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3959" took too long (2.77113478s) to execute
	2021-08-14 09:37:16.767332 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (2.810616713s) to execute
	2021-08-14 09:37:16.767593 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (3.188957483s) to execute
	2021-08-14 09:37:16.767688 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.180831329s) to execute
	2021-08-14 09:37:16.767919 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-pause-20210814093545-6746.169b22b0f0946975\" " with result "range_response_count:1 size:863" took too long (1.180140558s) to execute
	2021-08-14 09:37:16.768090 W | etcdserver: request "header:<ID:8128006959566550730 > lease_revoke:<id:70cc7b4404ddf6a8>" with result "size:29" took too long (854.719058ms) to execute
	2021-08-14 09:37:16.768297 W | etcdserver: read-only range request "key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true " with result "range_response_count:0 size:5" took too long (837.263392ms) to execute
	
	* 
	* ==> kernel <==
	*  09:37:58 up  1:20,  0 users,  load average: 2.68, 2.85, 1.86
	Linux pause-20210814093545-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086] <==
	* I0814 09:37:28.776382       1 client.go:360] parsed scheme: "passthrough"
	I0814 09:37:28.776426       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0814 09:37:28.776435       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0814 09:37:32.428013       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0814 09:37:32.428164       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:37:32.430023       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:37:32.431163       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:37:32.432314       1 trace.go:205] Trace[1633753724]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:37:22.432) (total time: 10000ms):
	Trace[1633753724]: [10.000081888s] [10.000081888s] END
	I0814 09:37:46.025662       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0814 09:37:46.025702       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0814 09:37:48.776569       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context canceled". Reconnecting...
	W0814 09:37:54.879498       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0814 09:37:54.879503       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0814 09:37:57.726602       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	I0814 09:37:58.510412       1 trace.go:205] Trace[1201042002]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:37:38.425) (total time: 20084ms):
	Trace[1201042002]: [20.084834906s] [20.084834906s] END
	I0814 09:37:58.510437       1 trace.go:205] Trace[674493227]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:37:28.019) (total time: 30490ms):
	Trace[674493227]: [30.490984431s] [30.490984431s] END
	E0814 09:37:58.510479       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	E0814 09:37:58.510479       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0814 09:37:58.510816       1 trace.go:205] Trace[1567281289]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:37:38.425) (total time: 20085ms):
	Trace[1567281289]: [20.085280107s] [20.085280107s] END
	I0814 09:37:58.511881       1 trace.go:205] Trace[720619171]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:37:28.019) (total time: 30492ms):
	Trace[720619171]: [30.492446428s] [30.492446428s] END
	
	* 
	* ==> kube-controller-manager [0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216] <==
	* I0814 09:36:35.558636       1 shared_informer.go:247] Caches are synced for cronjob 
	I0814 09:36:35.594056       1 shared_informer.go:247] Caches are synced for disruption 
	I0814 09:36:35.594080       1 disruption.go:371] Sending events to api server.
	I0814 09:36:35.618269       1 shared_informer.go:247] Caches are synced for attach detach 
	I0814 09:36:35.626411       1 shared_informer.go:247] Caches are synced for PV protection 
	I0814 09:36:35.658052       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0814 09:36:35.658104       1 shared_informer.go:247] Caches are synced for expand 
	I0814 09:36:35.666214       1 shared_informer.go:247] Caches are synced for endpoint 
	I0814 09:36:35.666883       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tbw9g"
	I0814 09:36:35.670094       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zgc2h"
	I0814 09:36:35.736985       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0814 09:36:35.758656       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0814 09:36:35.758886       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0814 09:36:35.767757       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.807894       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0814 09:36:35.810106       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.813582       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0814 09:36:36.206921       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.206946       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:36:36.237226       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.328706       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0814 09:36:36.413536       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:36.418045       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7njgj"
	I0814 09:36:36.433569       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:50.510455       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97] <==
	* I0814 09:36:37.040930       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:36:37.040978       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:36:37.041009       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:36:37.135620       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:36:37.135697       1 server_others.go:212] Using iptables Proxier.
	I0814 09:36:37.135734       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:36:37.135764       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:36:37.136453       1 server.go:643] Version: v1.21.3
	I0814 09:36:37.137196       1 config.go:315] Starting service config controller
	I0814 09:36:37.138165       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:36:37.139739       1 config.go:224] Starting endpoint slice config controller
	I0814 09:36:37.139765       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:36:37.141550       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:36:37.142664       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:36:37.240414       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:36:37.240445       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119] <==
	* I0814 09:36:19.239734       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.239780       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.240100       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:36:19.240128       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:36:19.310200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:19.310391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:36:19.310488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.310570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:36:19.310642       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:36:19.310718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:36:19.310788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:19.310860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:36:19.310940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:19.311017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:36:19.312650       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:36:20.163865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.194900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:20.263532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.308670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:20.382950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:20.414192       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0814 09:36:23.340697       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:37:58 UTC. --
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827448    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35667363-ef4b-4333-af82-ae0a5645f03c-xtables-lock\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827473    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b76115f-19df-4554-87f1-b88734b7e601-xtables-lock\") pod \"kube-proxy-zgc2h\" (UID: \"2b76115f-19df-4554-87f1-b88734b7e601\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827529    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtcs\" (UniqueName: \"kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933751    1268 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933781    1268 projected.go:199] Error preparing data for projected volume kube-api-access-99zbk for pod kube-system/kube-proxy-zgc2h: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933854    1268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/2b76115f-19df-4554-87f1-b88734b7e601-kube-api-access-99zbk podName:2b76115f-19df-4554-87f1-b88734b7e601 nodeName:}" failed. No retries permitted until 2021-08-14 09:36:36.43382935 +0000 UTC m=+14.220654883 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-99zbk\" (UniqueName: \"kubernetes.io/projected/2b76115f-19df-4554-87f1-b88734b7e601-kube-api-access-99zbk\") pod \"kube-proxy-zgc2h\" (UID: \"2b76115f-19df-4554-87f1-b88734b7e601\") : configmap \"kube-root-ca.crt\" not found"
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934499    1268 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934531    1268 projected.go:199] Error preparing data for projected volume kube-api-access-kqtcs for pod kube-system/kindnet-tbw9g: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934605    1268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs podName:35667363-ef4b-4333-af82-ae0a5645f03c nodeName:}" failed. No retries permitted until 2021-08-14 09:36:36.434579072 +0000 UTC m=+14.221404600 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-kqtcs\" (UniqueName: \"kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") : configmap \"kube-root-ca.crt\" not found"
	Aug 14 09:36:37 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:37.880472    1268 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.578728    1268 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.761755    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4-config-volume\") pod \"coredns-558bd4d5db-7njgj\" (UID: \"5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4\") "
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.761812    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j45c\" (UniqueName: \"kubernetes.io/projected/5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4-kube-api-access-8j45c\") pod \"coredns-558bd4d5db-7njgj\" (UID: \"5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4\") "
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: W0814 09:36:58.476551    1268 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: W0814 09:36:58.476585    1268 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856585    1268 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856636    1268 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856654    1268 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:36:59 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:59.441448    1268 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.405402    1268 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.604028    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2xz\" (UniqueName: \"kubernetes.io/projected/80eca970-b4ab-4ac8-af20-f814411672fb-kube-api-access-tm2xz\") pod \"storage-provisioner\" (UID: \"80eca970-b4ab-4ac8-af20-f814411672fb\") "
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.604122    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/80eca970-b4ab-4ac8-af20-f814411672fb-tmp\") pod \"storage-provisioner\" (UID: \"80eca970-b4ab-4ac8-af20-f814411672fb\") "
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 154 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00013b210, 0xc000000002)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00013b200)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00052a720, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000140f00, 0x18e5530, 0xc00051d0c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000362440)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000362440, 0x18b3d60, 0xc000708690, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000362440, 0x3b9aca00, 0x0, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000362440, 0x3b9aca00, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:37:58.523719  167709 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210814093545-6746
helpers_test.go:236: (dbg) docker inspect pause-20210814093545-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c",
	        "Created": "2021-08-14T09:35:47.328510764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:35:47.788540698Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hostname",
	        "HostsPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hosts",
	        "LogPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c-json.log",
	        "Name": "/pause-20210814093545-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210814093545-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210814093545-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210814093545-6746",
	                "Source": "/var/lib/docker/volumes/pause-20210814093545-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210814093545-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "name.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ccc2af153ef9d917059bc8c4f07b140ac515f4a831ba1bf6c90b0246a3c1997",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1ccc2af153ef",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210814093545-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "348c3cd444d9"
	                    ],
	                    "NetworkID": "d1c345d3493c76f3a399eb72a44a3805f583371e015cb9c75f513d1b9430742c",
	                    "EndpointID": "7f43385a3cfb69f0364951734129c7173a9f54c3b30297f57443926db80f5d72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746: exit status 2 (15.819793185s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:38:14.627249  169998 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25: exit status 110 (1m0.867808633s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                       | test-preload-20210814092837-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:30:48 UTC | Sat, 14 Aug 2021 09:30:51 UTC |
	|         | test-preload-20210814092837-6746         |                                          |         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:30:51 UTC | Sat, 14 Aug 2021 09:31:33 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	|         | --memory=2048 --driver=docker            |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:31:34 UTC | Sat, 14 Aug 2021 09:31:34 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	|         | --cancel-scheduled                       |                                          |         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:31:47 UTC | Sat, 14 Aug 2021 09:32:12 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	|         | --schedule 5s                            |                                          |         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:14 UTC | Sat, 14 Aug 2021 09:32:19 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210814093219-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:26 UTC | Sat, 14 Aug 2021 09:32:32 UTC |
	|         | insufficient-storage-20210814093219-6746 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:33:38 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:38 UTC | Sat, 14 Aug 2021 09:33:59 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:34:08 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:34:08 UTC | Sat, 14 Aug 2021 09:34:11 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:59 UTC | Sat, 14 Aug 2021 09:35:00 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:01 UTC | Sat, 14 Aug 2021 09:35:42 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:35:45 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:36:31 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:31 UTC | Sat, 14 Aug 2021 09:36:34 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20210814093634-6746           | kubenet-20210814093634-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:34 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p flannel-20210814093635-6746           | flannel-20210814093635-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p false-20210814093635-6746             | false-20210814093635-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:36 UTC |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:45 UTC | Sat, 14 Aug 2021 09:36:56 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:56 UTC | Sat, 14 Aug 2021 09:37:18 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:36 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| -p      | force-systemd-flag-20210814093636-6746   | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:28 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:28 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | force-systemd-env-20210814093728-6746    |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-env-20210814093728-6746    | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:37:28
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:37:28.503077  166020 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:37:28.503146  166020 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:28.503150  166020 out.go:311] Setting ErrFile to fd 2...
	I0814 09:37:28.503153  166020 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:28.503241  166020 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:37:28.503465  166020 out.go:305] Setting JSON to false
	I0814 09:37:28.539408  166020 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4811,"bootTime":1628929038,"procs":252,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:37:28.539505  166020 start.go:121] virtualization: kvm guest
	I0814 09:37:28.541780  166020 out.go:177] * [force-systemd-env-20210814093728-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:37:28.543267  166020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:37:28.541935  166020 notify.go:169] Checking for updates...
	I0814 09:37:28.544650  166020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:37:28.546024  166020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:37:28.547313  166020 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:37:28.548642  166020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0814 09:37:28.549125  166020 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:37:28.549225  166020 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:37:28.549311  166020 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:37:28.549352  166020 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:37:28.596127  166020 docker.go:132] docker version: linux-19.03.15
	I0814 09:37:28.596207  166020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:37:28.675848  166020 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:37:28.6325853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:37:28.675957  166020 docker.go:244] overlay module found
	I0814 09:37:28.677814  166020 out.go:177] * Using the docker driver based on user configuration
	I0814 09:37:28.677846  166020 start.go:278] selected driver: docker
	I0814 09:37:28.677851  166020 start.go:751] validating driver "docker" against <nil>
	I0814 09:37:28.677868  166020 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:37:28.677918  166020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:37:28.677934  166020 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:37:28.679360  166020 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:37:28.680279  166020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:37:28.758365  166020 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:37:28.715860711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:37:28.758453  166020 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:37:28.758615  166020 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:37:28.758635  166020 cni.go:93] Creating CNI manager for ""
	I0814 09:37:28.758640  166020 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:37:28.758646  166020 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:37:28.758653  166020 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:37:28.758658  166020 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:37:28.758668  166020 start_flags.go:277] config:
	{Name:force-systemd-env-20210814093728-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210814093728-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:37:28.760562  166020 out.go:177] * Starting control plane node force-systemd-env-20210814093728-6746 in cluster force-systemd-env-20210814093728-6746
	I0814 09:37:28.760599  166020 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:37:28.761949  166020 out.go:177] * Pulling base image ...
	I0814 09:37:28.761983  166020 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:37:28.762008  166020 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:37:28.762024  166020 cache.go:56] Caching tarball of preloaded images
	I0814 09:37:28.762082  166020 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:37:28.762179  166020 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:37:28.762201  166020 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:37:28.762290  166020 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/config.json ...
	I0814 09:37:28.762312  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/config.json: {Name:mk541068afa495451d2e49abe676ce80b9e24c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:28.834526  166020 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:37:28.834550  166020 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:37:28.834566  166020 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:37:28.834597  166020 start.go:313] acquiring machines lock for force-systemd-env-20210814093728-6746: {Name:mk8a94a7f824f4038042c1dab6f774e3ec47710b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:37:28.834720  166020 start.go:317] acquired machines lock for "force-systemd-env-20210814093728-6746" in 96.103µs
	I0814 09:37:28.834746  166020 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20210814093728-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210814093728-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:37:28.834807  166020 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:37:28.836850  166020 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0814 09:37:28.837102  166020 start.go:160] libmachine.API.Create for "force-systemd-env-20210814093728-6746" (driver="docker")
	I0814 09:37:28.837137  166020 client.go:168] LocalClient.Create starting
	I0814 09:37:28.837225  166020 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:37:28.837297  166020 main.go:130] libmachine: Decoding PEM data...
	I0814 09:37:28.837316  166020 main.go:130] libmachine: Parsing certificate...
	I0814 09:37:28.837435  166020 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:37:28.837456  166020 main.go:130] libmachine: Decoding PEM data...
	I0814 09:37:28.837470  166020 main.go:130] libmachine: Parsing certificate...
	I0814 09:37:28.837857  166020 cli_runner.go:115] Run: docker network inspect force-systemd-env-20210814093728-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:37:28.874520  166020 cli_runner.go:162] docker network inspect force-systemd-env-20210814093728-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:37:28.874611  166020 network_create.go:255] running [docker network inspect force-systemd-env-20210814093728-6746] to gather additional debugging logs...
	I0814 09:37:28.874630  166020 cli_runner.go:115] Run: docker network inspect force-systemd-env-20210814093728-6746
	W0814 09:37:28.910301  166020 cli_runner.go:162] docker network inspect force-systemd-env-20210814093728-6746 returned with exit code 1
	I0814 09:37:28.910330  166020 network_create.go:258] error running [docker network inspect force-systemd-env-20210814093728-6746]: docker network inspect force-systemd-env-20210814093728-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20210814093728-6746
	I0814 09:37:28.910344  166020 network_create.go:260] output of [docker network inspect force-systemd-env-20210814093728-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20210814093728-6746
	
	** /stderr **
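	Note: the failed inspect above is expected; the per-profile network does not exist yet, so minikube gathers debug output and then creates it. As an illustrative check outside this run, networks created by minikube can be listed via the label it applies during creation below:
	
		docker network ls --filter label=created_by.minikube.sigs.k8s.io=true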
	I0814 09:37:28.910402  166020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:37:28.947376  166020 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-d1c345d3493c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:f5:52:80}}
	I0814 09:37:28.948494  166020 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000187330] misses:0}
	I0814 09:37:28.948536  166020 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:37:28.948558  166020 network_create.go:106] attempt to create docker network force-systemd-env-20210814093728-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:37:28.948611  166020 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20210814093728-6746
	I0814 09:37:29.017679  166020 network_create.go:90] docker network force-systemd-env-20210814093728-6746 192.168.58.0/24 created
	I0814 09:37:29.017706  166020 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20210814093728-6746" container
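	Note: the subnet scan above reserves the first free private /24 (here 192.168.58.0/24) and assigns .2 to the node. An illustrative way to confirm the result afterwards (not part of the captured run):
	
		docker network inspect force-systemd-env-20210814093728-6746 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'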
	I0814 09:37:29.017764  166020 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:37:29.057857  166020 cli_runner.go:115] Run: docker volume create force-systemd-env-20210814093728-6746 --label name.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:37:29.097841  166020 oci.go:102] Successfully created a docker volume force-systemd-env-20210814093728-6746
	I0814 09:37:29.097912  166020 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20210814093728-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --entrypoint /usr/bin/test -v force-systemd-env-20210814093728-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:37:29.922536  166020 oci.go:106] Successfully prepared a docker volume force-systemd-env-20210814093728-6746
	W0814 09:37:29.922599  166020 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:37:29.922613  166020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:37:29.922670  166020 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:37:29.922682  166020 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:37:29.922700  166020 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:37:29.922769  166020 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20210814093728-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:37:30.006070  166020 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20210814093728-6746 --name force-systemd-env-20210814093728-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20210814093728-6746 --network force-systemd-env-20210814093728-6746 --ip 192.168.58.2 --volume force-systemd-env-20210814093728-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
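	Note: the node container publishes SSH (22), the Docker API (2376), the Kubernetes API (8443, 32443) and a registry port (5000) on ephemeral 127.0.0.1 ports. As an illustrative example, the following would print the host mapping for SSH (127.0.0.1:32913 later in this log):
	
		docker port force-systemd-env-20210814093728-6746 22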
	I0814 09:37:30.496619  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Running}}
	I0814 09:37:30.540938  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:37:30.586114  166020 cli_runner.go:115] Run: docker exec force-systemd-env-20210814093728-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:37:30.719779  166020 oci.go:278] the created container "force-systemd-env-20210814093728-6746" has a running status.
	I0814 09:37:30.719819  166020 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa...
	I0814 09:37:31.084082  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0814 09:37:31.084123  166020 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:37:31.503370  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:37:31.542574  166020 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:37:31.542599  166020 kic_runner.go:115] Args: [docker exec --privileged force-systemd-env-20210814093728-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:37:34.033392  166020 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20210814093728-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.110035332s)
	I0814 09:37:34.033419  166020 kic.go:188] duration metric: took 4.110718 seconds to extract preloaded images to volume
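	Note: the preload tarball is unpacked directly into the profile's Docker volume, which is then mounted at /var inside the node container. A sketch for inspecting the extracted containerd state with the same kicbase image (illustrative only, assuming /bin/ls exists in the image):
	
		docker run --rm --entrypoint /bin/ls -v force-systemd-env-20210814093728-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 /var/lib/containerd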
	I0814 09:37:34.033489  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:37:34.071210  166020 machine.go:88] provisioning docker machine ...
	I0814 09:37:34.071245  166020 ubuntu.go:169] provisioning hostname "force-systemd-env-20210814093728-6746"
	I0814 09:37:34.071301  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:37:34.110620  166020 main.go:130] libmachine: Using SSH client type: native
	I0814 09:37:34.110844  166020 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0814 09:37:34.110869  166020 main.go:130] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-20210814093728-6746 && echo "force-systemd-env-20210814093728-6746" | sudo tee /etc/hostname
	I0814 09:37:34.284396  166020 main.go:130] libmachine: SSH cmd err, output: <nil>: force-systemd-env-20210814093728-6746
	
	I0814 09:37:34.284475  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:37:34.323603  166020 main.go:130] libmachine: Using SSH client type: native
	I0814 09:37:34.323782  166020 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0814 09:37:34.323829  166020 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-20210814093728-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-20210814093728-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-20210814093728-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:37:34.448082  166020 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:37:34.448109  166020 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:37:34.448136  166020 ubuntu.go:177] setting up certificates
	I0814 09:37:34.448147  166020 provision.go:83] configureAuth start
	I0814 09:37:34.448204  166020 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20210814093728-6746
	I0814 09:37:34.486607  166020 provision.go:138] copyHostCerts
	I0814 09:37:34.486643  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:37:34.486674  166020 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:37:34.486692  166020 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:37:34.486767  166020 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:37:34.486856  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:37:34.486883  166020 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:37:34.486893  166020 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:37:34.486923  166020 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:37:34.486994  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:37:34.487018  166020 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:37:34.487027  166020 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:37:34.487055  166020 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:37:34.487108  166020 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-20210814093728-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-20210814093728-6746]
	I0814 09:37:34.610490  166020 provision.go:172] copyRemoteCerts
	I0814 09:37:34.610537  166020 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:37:34.610587  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:37:34.648459  166020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa Username:docker}
	I0814 09:37:34.735276  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 09:37:34.735343  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:37:34.751078  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 09:37:34.751112  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:37:34.766121  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 09:37:34.766153  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0814 09:37:34.781074  166020 provision.go:86] duration metric: configureAuth took 332.917696ms
	I0814 09:37:34.781093  166020 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:37:34.781263  166020 config.go:177] Loaded profile config "force-systemd-env-20210814093728-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:37:34.781277  166020 machine.go:91] provisioned docker machine in 710.046177ms
	I0814 09:37:34.781284  166020 client.go:171] LocalClient.Create took 5.944138018s
	I0814 09:37:34.781301  166020 start.go:168] duration metric: libmachine.API.Create for "force-systemd-env-20210814093728-6746" took 5.944199007s
	I0814 09:37:34.781314  166020 start.go:267] post-start starting for "force-systemd-env-20210814093728-6746" (driver="docker")
	I0814 09:37:34.781324  166020 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:37:34.781371  166020 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:37:34.781413  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:37:34.819915  166020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa Username:docker}
	I0814 09:37:34.911848  166020 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:37:34.914710  166020 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:37:34.914733  166020 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:37:34.914746  166020 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:37:34.914753  166020 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:37:34.914766  166020 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:37:34.914814  166020 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:37:34.914906  166020 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:37:34.914917  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> /etc/ssl/certs/67462.pem
	I0814 09:37:34.915017  166020 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:37:34.921331  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:37:34.937427  166020 start.go:270] post-start completed in 156.099834ms
	I0814 09:37:34.937751  166020 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20210814093728-6746
	I0814 09:37:34.976788  166020 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/config.json ...
	I0814 09:37:34.976979  166020 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:37:34.977015  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:37:35.013511  166020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa Username:docker}
	I0814 09:37:35.100565  166020 start.go:129] duration metric: createHost completed in 6.265747385s
	I0814 09:37:35.100589  166020 start.go:80] releasing machines lock for "force-systemd-env-20210814093728-6746", held for 6.265854408s
	I0814 09:37:35.100661  166020 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20210814093728-6746
	I0814 09:37:35.138758  166020 ssh_runner.go:149] Run: systemctl --version
	I0814 09:37:35.138778  166020 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:37:35.138804  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:37:35.138824  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:37:35.180543  166020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa Username:docker}
	I0814 09:37:35.182291  166020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa Username:docker}
	I0814 09:37:35.268663  166020 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:37:35.292202  166020 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:37:35.300551  166020 docker.go:153] disabling docker service ...
	I0814 09:37:35.300599  166020 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:37:35.315444  166020 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:37:35.323342  166020 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:37:35.383999  166020 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:37:35.437923  166020 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:37:35.446487  166020 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:37:35.457734  166020 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSB0cnVlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
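	Note: the base64 payload above is the generated /etc/containerd/config.toml; among other settings it contains "SystemdCgroup = true", consistent with MINIKUBE_FORCE_SYSTEMD=true for this profile. To read it, pipe the payload through base64 -d, e.g. (illustrative, with $CONFIG_B64 standing in for the blob above):
	
		echo "$CONFIG_B64" | base64 -d | grep -n SystemdCgroup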
	I0814 09:37:35.469298  166020 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:37:35.475016  166020 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:37:35.475062  166020 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:37:35.481280  166020 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:37:35.486791  166020 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:37:35.540738  166020 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:37:35.602656  166020 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:37:35.602716  166020 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:37:35.606042  166020 start.go:413] Will wait 60s for crictl version
	I0814 09:37:35.606092  166020 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:37:35.628132  166020 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:37:35Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
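	Note: the fatal "server is not initialized yet" is transient; containerd was restarted a moment earlier and its CRI service is still coming up, so minikube retries. An equivalent manual probe (illustrative) would be:
	
		sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version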
	I0814 09:37:46.677943  166020 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:37:46.712050  166020 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:37:46.712117  166020 ssh_runner.go:149] Run: containerd --version
	I0814 09:37:46.732666  166020 ssh_runner.go:149] Run: containerd --version
	I0814 09:37:46.755414  166020 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:37:46.755475  166020 cli_runner.go:115] Run: docker network inspect force-systemd-env-20210814093728-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:37:46.791622  166020 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:37:46.794680  166020 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
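	Note: the one-liner above is an idempotent /etc/hosts update; writing to a temp file and copying it with sudo avoids the usual pitfall that output redirection is not run under sudo. Expanded for readability (illustrative, same effect):
	
		{ grep -v $'\thost.minikube.internal$' /etc/hosts
		  echo "192.168.58.1	host.minikube.internal"
		} > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts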
	I0814 09:37:46.803489  166020 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:37:46.803554  166020 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:37:46.824877  166020 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:37:46.824897  166020 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:37:46.824932  166020 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:37:46.845157  166020 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:37:46.845174  166020 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:37:46.845214  166020 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:37:46.865131  166020 cni.go:93] Creating CNI manager for ""
	I0814 09:37:46.865151  166020 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:37:46.865160  166020 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:37:46.865171  166020 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-20210814093728-6746 NodeName:force-systemd-env-20210814093728-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:37:46.865284  166020 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "force-systemd-env-20210814093728-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	nodefs.available: "0%"
	nodefs.inodesFree: "0%"
	imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
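
The kubeadm YAML above is rendered from the options struct logged at kubeadm.go:153. A toy Go sketch of that struct-to-YAML step using text/template (the template text and the opts fields here are assumptions, trimmed down to the InitConfiguration stanza; minikube's real template is much larger):

package main

import (
	"os"
	"text/template"
)

// opts carries a few of the logged kubeadm options; illustrative only.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.58.2",
		APIServerPort:    8443,
		NodeName:         "force-systemd-env-20210814093728-6746",
		CRISocket:        "/run/containerd/containerd.sock",
	})
}
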
	I0814 09:37:46.865358  166020 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=force-systemd-env-20210814093728-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210814093728-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0814 09:37:46.865399  166020 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:37:46.871761  166020 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:37:46.871808  166020 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:37:46.877782  166020 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
	I0814 09:37:46.888904  166020 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:37:46.900254  166020 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I0814 09:37:46.911161  166020 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:37:46.913711  166020 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:37:46.921924  166020 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746 for IP: 192.168.58.2
	I0814 09:37:46.921961  166020 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:37:46.921973  166020 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:37:46.922015  166020 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.key
	I0814 09:37:46.922023  166020 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.crt with IP's: []
	I0814 09:37:47.061357  166020 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.crt ...
	I0814 09:37:47.061381  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.crt: {Name:mk279f5fd786345ef243a98dd3c6e9a382423527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:47.061539  166020 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.key ...
	I0814 09:37:47.061555  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.key: {Name:mk4c05d7c54ced8fd23d732a2e7c6d02a57feffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:47.061647  166020 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.key.cee25041
	I0814 09:37:47.061657  166020 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:37:47.148038  166020 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.crt.cee25041 ...
	I0814 09:37:47.148065  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.crt.cee25041: {Name:mk30b59a0af6578a148ca35698fc80a79f3bd60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:47.148203  166020 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.key.cee25041 ...
	I0814 09:37:47.148215  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.key.cee25041: {Name:mk61db108e89aa9eed1cfbec2d38ea9a190230e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:47.148281  166020 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.crt
	I0814 09:37:47.148335  166020 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.key
	I0814 09:37:47.148386  166020 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.key
	I0814 09:37:47.148394  166020 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.crt with IP's: []
	I0814 09:37:47.210850  166020 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.crt ...
	I0814 09:37:47.210873  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.crt: {Name:mk50aa5a49af56f7f4e434f1b9529a83f577110a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:47.211023  166020 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.key ...
	I0814 09:37:47.211037  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.key: {Name:mk608c7a7f66d540937c6c005d5f00450c498d36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:37:47.211125  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 09:37:47.211143  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 09:37:47.211155  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 09:37:47.211168  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 09:37:47.211182  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 09:37:47.211198  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 09:37:47.211213  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 09:37:47.211227  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 09:37:47.211275  166020 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:37:47.211313  166020 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:37:47.211329  166020 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:37:47.211354  166020 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:37:47.211376  166020 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:37:47.211398  166020 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:37:47.211438  166020 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:37:47.211469  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:37:47.211484  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem -> /usr/share/ca-certificates/6746.pem
	I0814 09:37:47.211497  166020 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> /usr/share/ca-certificates/67462.pem
	I0814 09:37:47.212363  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:37:47.301779  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 09:37:47.318176  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:37:47.333560  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:37:47.348955  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:37:47.363870  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:37:47.378950  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:37:47.394184  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:37:47.409172  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:37:47.424456  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:37:47.439548  166020 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:37:47.455599  166020 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:37:47.466722  166020 ssh_runner.go:149] Run: openssl version
	I0814 09:37:47.470963  166020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:37:47.477294  166020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:37:47.479981  166020 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:37:47.480023  166020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:37:47.484364  166020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:37:47.490736  166020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:37:47.497147  166020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:37:47.499788  166020 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:37:47.499833  166020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:37:47.504237  166020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:37:47.510693  166020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:37:47.517086  166020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:37:47.519738  166020 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:37:47.519785  166020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:37:47.524010  166020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
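
The `openssl x509 -hash` runs and the `ln -fs ... /etc/ssl/certs/<hash>.0` links above follow OpenSSL's subject-hash lookup convention: TLS clients locate a trusted CA by the hash of its subject, so each certificate needs a HASH.0 symlink in /etc/ssl/certs. A small Go sketch of that step; it shells out to the same openssl invocation the log runs, and the helper name is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash asks openssl for the subject hash of a PEM cert,
// exactly as the "openssl x509 -hash -noout -in ..." runs above do.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	// minikube then runs: ln -fs <cert> /etc/ssl/certs/<hash>.0
	fmt.Printf("would link /etc/ssl/certs/%s.0\n", h)
}
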
	I0814 09:37:47.530486  166020 kubeadm.go:390] StartCluster: {Name:force-systemd-env-20210814093728-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:force-systemd-env-20210814093728-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:37:47.530553  166020 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:37:47.530585  166020 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:37:47.552174  166020 cri.go:76] found id: ""
	I0814 09:37:47.552223  166020 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:37:47.558353  166020 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:37:47.564298  166020 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:37:47.564343  166020 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:37:47.570256  166020 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:37:47.570286  166020 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:37:47.827372  166020 out.go:204]   - Generating certificates and keys ...
	I0814 09:37:50.142913  166020 out.go:204]   - Booting up control plane ...
	I0814 09:38:10.188543  166020 out.go:204]   - Configuring RBAC rules ...
	I0814 09:38:10.600153  166020 cni.go:93] Creating CNI manager for ""
	I0814 09:38:10.600173  166020 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:38:10.601843  166020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:38:10.601901  166020 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:38:10.605670  166020 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:38:10.605695  166020 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:38:10.619869  166020 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:38:10.982029  166020 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:38:10.982095  166020 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:38:10.982128  166020 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=force-systemd-env-20210814093728-6746 minikube.k8s.io/updated_at=2021_08_14T09_38_10_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:38:11.048321  166020 ops.go:34] apiserver oom_adj: -16
	I0814 09:38:11.048392  166020 kubeadm.go:985] duration metric: took 66.358668ms to wait for elevateKubeSystemPrivileges.
	I0814 09:38:11.059340  166020 kubeadm.go:392] StartCluster complete in 23.528852004s
	I0814 09:38:11.059381  166020 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:38:11.059469  166020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:38:11.060648  166020 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:38:11.061430  166020 kapi.go:59] client config for force-systemd-env-20210814093728-6746: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:38:11.061826  166020 cert_rotation.go:137] Starting client certificate rotation controller
	I0814 09:38:11.575730  166020 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "force-systemd-env-20210814093728-6746" rescaled to 1
	I0814 09:38:11.575790  166020 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:38:11.578125  166020 out.go:177] * Verifying Kubernetes components...
	I0814 09:38:11.575833  166020 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:38:11.575850  166020 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0814 09:38:11.578318  166020 addons.go:59] Setting storage-provisioner=true in profile "force-systemd-env-20210814093728-6746"
	I0814 09:38:11.578339  166020 addons.go:135] Setting addon storage-provisioner=true in "force-systemd-env-20210814093728-6746"
	W0814 09:38:11.578347  166020 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:38:11.576051  166020 config.go:177] Loaded profile config "force-systemd-env-20210814093728-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:38:11.578376  166020 addons.go:59] Setting default-storageclass=true in profile "force-systemd-env-20210814093728-6746"
	I0814 09:38:11.578380  166020 host.go:66] Checking if "force-systemd-env-20210814093728-6746" exists ...
	I0814 09:38:11.578190  166020 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:38:11.578397  166020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-20210814093728-6746"
	I0814 09:38:11.578756  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:38:11.578934  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:38:11.627600  166020 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:38:11.627420  166020 kapi.go:59] client config for force-systemd-env-20210814093728-6746: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:38:11.627694  166020 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:38:11.627706  166020 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:38:11.627758  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:38:11.632243  166020 addons.go:135] Setting addon default-storageclass=true in "force-systemd-env-20210814093728-6746"
	W0814 09:38:11.632264  166020 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:38:11.632287  166020 host.go:66] Checking if "force-systemd-env-20210814093728-6746" exists ...
	I0814 09:38:11.632620  166020 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210814093728-6746 --format={{.State.Status}}
	I0814 09:38:11.663202  166020 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
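
The sed pipeline above edits the CoreDNS Corefile in place: it injects a hosts { ... fallthrough } block mapping host.minikube.internal to the gateway IP immediately before the "forward . /etc/resolv.conf" plugin line, then replaces the configmap. The same insertion expressed as a Go sketch (the function name and the sample Corefile are illustrative):

package main

import (
	"fmt"
	"strings"
)

// injectHosts inserts a hosts block before the forward plugin line,
// mirroring the sed '/^        forward .../i ...' command above.
func injectHosts(corefile, ip string) string {
	block := "        hosts {\n           " + ip + " host.minikube.internal\n           fallthrough\n        }\n"
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(block)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHosts(corefile, "192.168.58.1"))
}
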
	I0814 09:38:11.663874  166020 kapi.go:59] client config for force-systemd-env-20210814093728-6746: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/force-systemd-env-20210814093728-6746/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:38:11.665264  166020 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:38:11.665310  166020 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:38:11.679694  166020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa Username:docker}
	I0814 09:38:11.687796  166020 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:38:11.687822  166020 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:38:11.687876  166020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210814093728-6746
	I0814 09:38:11.735431  166020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/force-systemd-env-20210814093728-6746/id_rsa Username:docker}
	I0814 09:38:11.814991  166020 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:38:11.866958  166020 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:38:12.016971  166020 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0814 09:38:12.016988  166020 api_server.go:70] duration metric: took 441.165751ms to wait for apiserver process to appear ...
	I0814 09:38:12.017008  166020 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:38:12.017019  166020 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:38:12.022629  166020 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:38:12.023490  166020 api_server.go:139] control plane version: v1.21.3
	I0814 09:38:12.023508  166020 api_server.go:129] duration metric: took 6.494409ms to wait for apiserver health ...
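
The healthz wait above is a plain HTTPS poll of the apiserver until /healthz returns 200 with body "ok". A rough Go sketch of such a probe (illustrative: the real client trusts the cluster CA from the kubeconfig, whereas this sketch skips TLS verification to stay short):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 "ok" or timeout expires.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Demo only; a real probe verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.58.2:8443/healthz", 30*time.Second))
}
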
	I0814 09:38:12.023516  166020 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:38:12.032995  166020 system_pods.go:59] 4 kube-system pods found
	I0814 09:38:12.033019  166020 system_pods.go:61] "etcd-force-systemd-env-20210814093728-6746" [303fba20-eb7b-4cfd-9162-5241b06037bc] Running
	I0814 09:38:12.033024  166020 system_pods.go:61] "kube-apiserver-force-systemd-env-20210814093728-6746" [47004516-35e6-41c3-8a1b-32f78e221033] Running
	I0814 09:38:12.033028  166020 system_pods.go:61] "kube-controller-manager-force-systemd-env-20210814093728-6746" [f6e36c5c-1d69-41b2-81bc-b0f358049de6] Pending
	I0814 09:38:12.033037  166020 system_pods.go:61] "kube-scheduler-force-systemd-env-20210814093728-6746" [d5f5c354-d34d-4d0e-bdc9-d9bf59e5ff97] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:38:12.033043  166020 system_pods.go:74] duration metric: took 9.523276ms to wait for pod list to return data ...
	I0814 09:38:12.033052  166020 kubeadm.go:547] duration metric: took 457.232708ms to wait for : map[apiserver:true system_pods:true] ...
	I0814 09:38:12.033067  166020 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:38:12.036218  166020 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:38:12.036240  166020 node_conditions.go:123] node cpu capacity is 8
	I0814 09:38:12.036252  166020 node_conditions.go:105] duration metric: took 3.176428ms to run NodePressure ...
	I0814 09:38:12.036262  166020 start.go:231] waiting for startup goroutines ...
	I0814 09:38:12.217959  166020 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0814 09:38:12.217986  166020 addons.go:344] enableAddons completed in 642.138581ms
	I0814 09:38:12.272934  166020 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:38:12.275224  166020 out.go:177] * Done! kubectl is now configured to use "force-systemd-env-20210814093728-6746" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a747d02c26253       6e38f40d628db       56 seconds ago       Exited              storage-provisioner       0                   0eed254b3316c
	ef9cd508c4bcf       296a6d5035e2d       About a minute ago   Running             coredns                   0                   79704c1ba1377
	9753722af7745       6de166512aa22       About a minute ago   Running             kindnet-cni               0                   e9f1ed022aae0
	66b515b3e4a14       adb2816ea823a       About a minute ago   Running             kube-proxy                0                   63ba7b0ef4459
	0fcd2105780a3       bc2bb319a7038       2 minutes ago        Running             kube-controller-manager   0                   74d460f2e7a7f
	8bcc07d573eb1       0369cf4303ffd       2 minutes ago        Running             etcd                      0                   60a80199b4a57
	d3bf648d26067       6be0dc1302e30       2 minutes ago        Running             kube-scheduler            0                   faadff72e3a9c
	ab29adb23277d       3d174f00aa39e       2 minutes ago        Running             kube-apiserver            0                   7b9c957209d40
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:38:15 UTC. --
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.541959125Z" level=info msg="Connect containerd service"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542013371Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542632511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542708189Z" level=info msg="Start subscribing containerd event"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542770137Z" level=info msg="Start recovering state"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542857642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542918689Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542967809Z" level=info msg="containerd successfully booted in 0.040983s"
	Aug 14 09:36:58 pause-20210814093545-6746 systemd[1]: Started containerd container runtime.
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625754612Z" level=info msg="Start event monitor"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625793205Z" level=info msg="Start snapshots syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625802136Z" level=info msg="Start cni network conf syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625807599Z" level=info msg="Start streaming server"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.008705018Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.026044573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 pid=2510
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.163827167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.166278582Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.228933904Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.229330725Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.371077991Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\" returns successfully"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449187715Z" level=info msg="Finish piping stderr of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449222888Z" level=info msg="Finish piping stdout of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.450533507Z" level=info msg="TaskExit event &TaskExit{ContainerID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,Pid:2562,ExitStatus:255,ExitedAt:2021-08-14 09:37:32.450264852 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501408095Z" level=info msg="shim disconnected" id=a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501502681Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug14 09:26] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:28] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:29] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth38d0eb85
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a bd 7c 39 49 62 08 06        ........|9Ib..
	[Aug14 09:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:32] cgroup: cgroup2: unknown option "nsdelegate"
	[ +13.411048] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.035402] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:33] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.451942] cgroup: cgroup2: unknown option "nsdelegate"
	[ +14.641136] tee (136175): /proc/134359/oom_adj is deprecated, please use /proc/134359/oom_score_adj instead.
	[Aug14 09:34] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.573195] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe29e5784
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 4c 1a e2 69 4b 08 06        .......L..iK..
	[  +8.954711] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:35] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth529d8992
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 22 4f ef 2e 27 f0 08 06        ......"O..'...
	[  +9.430011] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:36] cgroup: cgroup2: unknown option "nsdelegate"
	[ +36.823390] cgroup: cgroup2: unknown option "nsdelegate"
	[ +15.237179] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth43e4fc69
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 7b 35 3d 7d 88 08 06        .......{5=}...
	[Aug14 09:37] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:38] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:39] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed] <==
	* 2021-08-14 09:36:15.337412 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-14 09:36:15.337510 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-14 09:36:15.338236 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:36:15.338273 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-14 09:36:29.373341 W | etcdserver: request "header:<ID:8128006959566550151 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:70cc7b4404ddf486>" with result "size:42" took too long (722.758009ms) to execute
	2021-08-14 09:36:29.375095 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210814093545-6746\" " with result "range_response_count:1 size:3970" took too long (1.316088805s) to execute
	2021-08-14 09:36:35.339357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:36:40.102148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:36:50.101987 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:00.102036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:10.102324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:10.796456 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (540.033169ms) to execute
	2021-08-14 09:37:10.796484 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (227.671703ms) to execute
	2021-08-14 09:37:10.796572 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (157.24978ms) to execute
	2021-08-14 09:37:13.568821 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000092497s) to execute
	2021-08-14 09:37:13.954357 W | wal: sync duration of 3.152713958s, expected less than 1s
	2021-08-14 09:37:13.955117 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:37466" took too long (3.15117799s) to execute
	2021-08-14 09:37:15.221449 W | wal: sync duration of 1.25762747s, expected less than 1s
	2021-08-14 09:37:16.767289 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3959" took too long (2.77113478s) to execute
	2021-08-14 09:37:16.767332 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (2.810616713s) to execute
	2021-08-14 09:37:16.767593 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (3.188957483s) to execute
	2021-08-14 09:37:16.767688 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.180831329s) to execute
	2021-08-14 09:37:16.767919 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-pause-20210814093545-6746.169b22b0f0946975\" " with result "range_response_count:1 size:863" took too long (1.180140558s) to execute
	2021-08-14 09:37:16.768090 W | etcdserver: request "header:<ID:8128006959566550730 > lease_revoke:<id:70cc7b4404ddf6a8>" with result "size:29" took too long (854.719058ms) to execute
	2021-08-14 09:37:16.768297 W | etcdserver: read-only range request "key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true " with result "range_response_count:0 size:5" took too long (837.263392ms) to execute
	
	* 
	* ==> kernel <==
	*  09:39:15 up  1:21,  0 users,  load average: 2.00, 2.60, 1.85
	Linux pause-20210814093545-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086] <==
	* W0814 09:39:02.756241       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:02.933862       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:02.991475       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:03.009839       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:03.182310       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:05.852393       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	E0814 09:39:09.357469       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:39:09.357629       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:39:09.358568       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:39:09.359778       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:39:09.361482       1 trace.go:205] Trace[1722952748]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:38:09.357) (total time: 60003ms):
	Trace[1722952748]: [1m0.003628734s] [1m0.003628734s] END
	W0814 09:39:11.446278       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:11.548027       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:11.584835       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:11.682741       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:39:12.599650       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0814 09:39:15.264237       1 trace.go:205] Trace[1855622950]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:38:15.264) (total time: 59999ms):
	Trace[1855622950]: [59.999710949s] [59.999710949s] END
	E0814 09:39:15.264279       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:39:15.264336       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:39:15.265497       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:39:15.266672       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:39:15.268282       1 trace.go:205] Trace[620807824]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:38:15.264) (total time: 60003ms):
	Trace[620807824]: [1m0.003773336s] [1m0.003773336s] END
	
	* 
	* ==> kube-controller-manager [0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216] <==
	* I0814 09:36:35.558636       1 shared_informer.go:247] Caches are synced for cronjob 
	I0814 09:36:35.594056       1 shared_informer.go:247] Caches are synced for disruption 
	I0814 09:36:35.594080       1 disruption.go:371] Sending events to api server.
	I0814 09:36:35.618269       1 shared_informer.go:247] Caches are synced for attach detach 
	I0814 09:36:35.626411       1 shared_informer.go:247] Caches are synced for PV protection 
	I0814 09:36:35.658052       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0814 09:36:35.658104       1 shared_informer.go:247] Caches are synced for expand 
	I0814 09:36:35.666214       1 shared_informer.go:247] Caches are synced for endpoint 
	I0814 09:36:35.666883       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tbw9g"
	I0814 09:36:35.670094       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zgc2h"
	I0814 09:36:35.736985       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0814 09:36:35.758656       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0814 09:36:35.758886       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0814 09:36:35.767757       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.807894       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0814 09:36:35.810106       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.813582       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0814 09:36:36.206921       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.206946       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:36:36.237226       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.328706       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0814 09:36:36.413536       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:36.418045       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7njgj"
	I0814 09:36:36.433569       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:50.510455       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97] <==
	* I0814 09:36:37.040930       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:36:37.040978       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:36:37.041009       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:36:37.135620       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:36:37.135697       1 server_others.go:212] Using iptables Proxier.
	I0814 09:36:37.135734       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:36:37.135764       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:36:37.136453       1 server.go:643] Version: v1.21.3
	I0814 09:36:37.137196       1 config.go:315] Starting service config controller
	I0814 09:36:37.138165       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:36:37.139739       1 config.go:224] Starting endpoint slice config controller
	I0814 09:36:37.139765       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:36:37.141550       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:36:37.142664       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:36:37.240414       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:36:37.240445       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119] <==
	* I0814 09:36:19.239734       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.239780       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.240100       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:36:19.240128       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:36:19.310200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:19.310391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:36:19.310488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.310570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:36:19.310642       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:36:19.310718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:36:19.310788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:19.310860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:36:19.310940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:19.311017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:36:19.312650       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:36:20.163865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.194900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:20.263532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.308670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:20.382950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:20.414192       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0814 09:36:23.340697       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:39:15 UTC. --
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827448    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35667363-ef4b-4333-af82-ae0a5645f03c-xtables-lock\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827473    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b76115f-19df-4554-87f1-b88734b7e601-xtables-lock\") pod \"kube-proxy-zgc2h\" (UID: \"2b76115f-19df-4554-87f1-b88734b7e601\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827529    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtcs\" (UniqueName: \"kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933751    1268 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933781    1268 projected.go:199] Error preparing data for projected volume kube-api-access-99zbk for pod kube-system/kube-proxy-zgc2h: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933854    1268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/2b76115f-19df-4554-87f1-b88734b7e601-kube-api-access-99zbk podName:2b76115f-19df-4554-87f1-b88734b7e601 nodeName:}" failed. No retries permitted until 2021-08-14 09:36:36.43382935 +0000 UTC m=+14.220654883 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-99zbk\" (UniqueName: \"kubernetes.io/projected/2b76115f-19df-4554-87f1-b88734b7e601-kube-api-access-99zbk\") pod \"kube-proxy-zgc2h\" (UID: \"2b76115f-19df-4554-87f1-b88734b7e601\") : configmap \"kube-root-ca.crt\" not found"
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934499    1268 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934531    1268 projected.go:199] Error preparing data for projected volume kube-api-access-kqtcs for pod kube-system/kindnet-tbw9g: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934605    1268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs podName:35667363-ef4b-4333-af82-ae0a5645f03c nodeName:}" failed. No retries permitted until 2021-08-14 09:36:36.434579072 +0000 UTC m=+14.221404600 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-kqtcs\" (UniqueName: \"kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") : configmap \"kube-root-ca.crt\" not found"
	Aug 14 09:36:37 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:37.880472    1268 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.578728    1268 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.761755    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4-config-volume\") pod \"coredns-558bd4d5db-7njgj\" (UID: \"5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4\") "
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.761812    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j45c\" (UniqueName: \"kubernetes.io/projected/5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4-kube-api-access-8j45c\") pod \"coredns-558bd4d5db-7njgj\" (UID: \"5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4\") "
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: W0814 09:36:58.476551    1268 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: W0814 09:36:58.476585    1268 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856585    1268 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856636    1268 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856654    1268 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:36:59 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:59.441448    1268 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.405402    1268 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.604028    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2xz\" (UniqueName: \"kubernetes.io/projected/80eca970-b4ab-4ac8-af20-f814411672fb-kube-api-access-tm2xz\") pod \"storage-provisioner\" (UID: \"80eca970-b4ab-4ac8-af20-f814411672fb\") "
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.604122    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/80eca970-b4ab-4ac8-af20-f814411672fb-tmp\") pod \"storage-provisioner\" (UID: \"80eca970-b4ab-4ac8-af20-f814411672fb\") "
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 154 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00013b210, 0xc000000002)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00013b200)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00052a720, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000140f00, 0x18e5530, 0xc00051d0c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000362440)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000362440, 0x18b3d60, 0xc000708690, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000362440, 0x3b9aca00, 0x0, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000362440, 0x3b9aca00, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0814 09:39:15.268244  170831 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/Pause (116.95s)
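The stderr block above records the exact command minikube shells out to when it tries to collect "describe nodes" output. A minimal Go sketch for re-running that command by hand with a hard deadline, which is useful when the apiserver hangs the way it does here (the binary and kubeconfig paths are copied from the stderr block; the 75-second deadline is an illustrative choice, not a value taken from the harness):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Bound the call with a context so a hung apiserver cannot block forever.
        ctx, cancel := context.WithTimeout(context.Background(), 75*time.Second)
        defer cancel()

        // Same invocation as in the stderr block above.
        cmd := exec.CommandContext(ctx, "sudo",
            "/var/lib/minikube/binaries/v1.21.3/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")

        out, err := cmd.CombinedOutput()
        fmt.Printf("output:\n%s\n", out)
        if err != nil {
            // "Error from server (Timeout)" here matches the failure captured above.
            fmt.Printf("command failed: %v\n", err)
        }
    }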

TestPause/serial/VerifyStatus (92.68s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210814093545-6746 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210814093545-6746 --output=json --layout=cluster: exit status 2 (17.31212468s)

-- stdout --
	{"Name":"pause-20210814093545-6746","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210814093545-6746 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210814093545-6746","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":500,"StatusName":"Error"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0814 09:39:32.830885  176749 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	
	E0814 09:39:32.831205  176749 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0814 09:39:32.831230  176749 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0814 09:39:32.831245  176749 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

** /stderr **
pause_test.go:190: incorrect status code: 101
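The check that fails at pause_test.go:190 compares the "StatusCode" field of the cluster-layout JSON shown in the stdout block above; code 101 ("Pausing") is a transient state, meaning the pause operation was still in flight when status was sampled. A minimal sketch of decoding and checking that payload (the struct mirrors only the top-level keys visible above, and the expected code is not stated in this report, so "want" below is a placeholder rather than the harness's real expectation):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // clusterStatus mirrors the top-level fields of the
    // "status --output=json --layout=cluster" payload shown above.
    type clusterStatus struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    func main() {
        // Trimmed copy of the payload from the stdout block above.
        payload := []byte(`{"Name":"pause-20210814093545-6746","StatusCode":101,"StatusName":"Pausing"}`)

        var st clusterStatus
        if err := json.Unmarshal(payload, &st); err != nil {
            log.Fatalf("decode status: %v", err)
        }

        const want = 200 // placeholder; the report does not show the expected code
        if st.StatusCode != want {
            fmt.Printf("incorrect status code: %d (%s)\n", st.StatusCode, st.StatusName)
        }
    }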
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210814093545-6746
helpers_test.go:236: (dbg) docker inspect pause-20210814093545-6746:

-- stdout --
	[
	    {
	        "Id": "348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c",
	        "Created": "2021-08-14T09:35:47.328510764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:35:47.788540698Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hostname",
	        "HostsPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hosts",
	        "LogPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c-json.log",
	        "Name": "/pause-20210814093545-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210814093545-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210814093545-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c64486d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/docker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad4088996939c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210814093545-6746",
	                "Source": "/var/lib/docker/volumes/pause-20210814093545-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210814093545-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "name.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ccc2af153ef9d917059bc8c4f07b140ac515f4a831ba1bf6c90b0246a3c1997",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1ccc2af153ef",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210814093545-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "348c3cd444d9"
	                    ],
	                    "NetworkID": "d1c345d3493c76f3a399eb72a44a3805f583371e015cb9c75f513d1b9430742c",
	                    "EndpointID": "7f43385a3cfb69f0364951734129c7173a9f54c3b30297f57443926db80f5d72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746: exit status 2 (14.503046487s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0814 09:39:47.375755  177900 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
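Both stderr blocks trace the status errors back to the apiserver's /healthz endpoint returning 500 with "[-]etcd failed". A minimal sketch of issuing the same probe against the endpoint logged above (the ?verbose query is what produces the per-check list seen in the log; skipping TLS verification is an illustrative shortcut, since the minikube apiserver certificate is not in the host trust store and this probe only cares about the status code and check list):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Address and port come from the stderr blocks above.
        const url = "https://192.168.49.2:8443/healthz?verbose"

        client := &http.Client{
            Timeout: 10 * time.Second,
            Transport: &http.Transport{
                // Illustrative shortcut: do not verify the apiserver cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        resp, err := client.Get(url)
        if err != nil {
            log.Fatalf("healthz probe failed: %v", err)
        }
        defer resp.Body.Close()

        // A 500 response whose body contains "[-]etcd failed" matches the
        // state captured in this report.
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status: %d\n%s\n", resp.StatusCode, body)
    }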
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25: exit status 110 (1m0.802461381s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                       | scheduled-stop-20210814093051-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:14 UTC | Sat, 14 Aug 2021 09:32:19 UTC |
	|         | scheduled-stop-20210814093051-6746       |                                          |         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210814093219-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:26 UTC | Sat, 14 Aug 2021 09:32:32 UTC |
	|         | insufficient-storage-20210814093219-6746 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:33:38 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:38 UTC | Sat, 14 Aug 2021 09:33:59 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:34:08 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:34:08 UTC | Sat, 14 Aug 2021 09:34:11 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:59 UTC | Sat, 14 Aug 2021 09:35:00 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:01 UTC | Sat, 14 Aug 2021 09:35:42 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:35:45 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:36:31 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:31 UTC | Sat, 14 Aug 2021 09:36:34 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20210814093634-6746           | kubenet-20210814093634-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:34 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p flannel-20210814093635-6746           | flannel-20210814093635-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p false-20210814093635-6746             | false-20210814093635-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:36 UTC |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:45 UTC | Sat, 14 Aug 2021 09:36:56 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:56 UTC | Sat, 14 Aug 2021 09:37:18 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:36 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| -p      | force-systemd-flag-20210814093636-6746   | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:28 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:28 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | force-systemd-env-20210814093728-6746    |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-env-20210814093728-6746    | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:15 UTC |
	|         | force-systemd-env-20210814093728-6746    |                                          |         |         |                               |                               |
	| start   | -p                                       | cert-options-20210814093815-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:15 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | cert-options-20210814093815-6746         |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                |                                          |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15            |                                          |         |         |                               |                               |
	|         | --apiserver-names=localhost              |                                          |         |         |                               |                               |
	|         | --apiserver-names=www.google.com         |                                          |         |         |                               |                               |
	|         | --apiserver-port=8555                    |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | cert-options-20210814093815-6746         | cert-options-20210814093815-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | ssh openssl x509 -text -noout -in        |                                          |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt    |                                          |         |         |                               |                               |
	| delete  | -p                                       | cert-options-20210814093815-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:39:02 UTC |
	|         | cert-options-20210814093815-6746         |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:39:02
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:39:02.663117  174943 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:39:02.663193  174943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:39:02.663215  174943 out.go:311] Setting ErrFile to fd 2...
	I0814 09:39:02.663219  174943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:39:02.663297  174943 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:39:02.663528  174943 out.go:305] Setting JSON to false
	I0814 09:39:02.698199  174943 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4905,"bootTime":1628929038,"procs":253,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:39:02.698265  174943 start.go:121] virtualization: kvm guest
	I0814 09:39:02.701044  174943 out.go:177] * [old-k8s-version-20210814093902-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:39:02.702691  174943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:39:02.701175  174943 notify.go:169] Checking for updates...
	I0814 09:39:02.704326  174943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:39:02.705770  174943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:39:02.707156  174943 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:39:02.707588  174943 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:39:02.707661  174943 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:39:02.707716  174943 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:39:02.707743  174943 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:39:02.757772  174943 docker.go:132] docker version: linux-19.03.15
	I0814 09:39:02.757846  174943 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:39:02.834009  174943 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:39:02.792005104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:39:02.834125  174943 docker.go:244] overlay module found
	I0814 09:39:02.836097  174943 out.go:177] * Using the docker driver based on user configuration
	I0814 09:39:02.836120  174943 start.go:278] selected driver: docker
	I0814 09:39:02.836125  174943 start.go:751] validating driver "docker" against <nil>
	I0814 09:39:02.836141  174943 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:39:02.836197  174943 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:39:02.836214  174943 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:39:02.837730  174943 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:39:02.838481  174943 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:39:02.915948  174943 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:39:02.872598918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:39:02.916078  174943 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:39:02.916214  174943 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:39:02.916233  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:02.916238  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:02.916244  174943 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:39:02.916251  174943 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:39:02.916255  174943 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:39:02.916263  174943 start_flags.go:277] config:
	{Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:39:02.918359  174943 out.go:177] * Starting control plane node old-k8s-version-20210814093902-6746 in cluster old-k8s-version-20210814093902-6746
	I0814 09:39:02.918400  174943 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:39:02.919787  174943 out.go:177] * Pulling base image ...
	I0814 09:39:02.919820  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:02.919847  174943 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:39:02.919872  174943 cache.go:56] Caching tarball of preloaded images
	I0814 09:39:02.919916  174943 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:39:02.920015  174943 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:39:02.920031  174943 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on containerd
	I0814 09:39:02.920134  174943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json ...
	I0814 09:39:02.920153  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json: {Name:mk04a532e2ac4420a6fb8880a4de59459858f2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:02.993067  174943 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:39:02.993094  174943 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:39:02.993112  174943 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:39:02.993157  174943 start.go:313] acquiring machines lock for old-k8s-version-20210814093902-6746: {Name:mk8e2fe5e854673f5d1990fabb56ddd331c139dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:39:02.993284  174943 start.go:317] acquired machines lock for "old-k8s-version-20210814093902-6746" in 106.703µs
	I0814 09:39:02.993313  174943 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0814 09:39:02.993400  174943 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:39:02.995534  174943 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0814 09:39:02.995772  174943 start.go:160] libmachine.API.Create for "old-k8s-version-20210814093902-6746" (driver="docker")
	I0814 09:39:02.995827  174943 client.go:168] LocalClient.Create starting
	I0814 09:39:02.995889  174943 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:39:02.995931  174943 main.go:130] libmachine: Decoding PEM data...
	I0814 09:39:02.995957  174943 main.go:130] libmachine: Parsing certificate...
	I0814 09:39:02.996078  174943 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:39:02.996106  174943 main.go:130] libmachine: Decoding PEM data...
	I0814 09:39:02.996128  174943 main.go:130] libmachine: Parsing certificate...
	I0814 09:39:02.996510  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:39:03.033171  174943 cli_runner.go:162] docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:39:03.033249  174943 network_create.go:255] running [docker network inspect old-k8s-version-20210814093902-6746] to gather additional debugging logs...
	I0814 09:39:03.033270  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746
	W0814 09:39:03.069356  174943 cli_runner.go:162] docker network inspect old-k8s-version-20210814093902-6746 returned with exit code 1
	I0814 09:39:03.069384  174943 network_create.go:258] error running [docker network inspect old-k8s-version-20210814093902-6746]: docker network inspect old-k8s-version-20210814093902-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20210814093902-6746
	I0814 09:39:03.069405  174943 network_create.go:260] output of [docker network inspect old-k8s-version-20210814093902-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20210814093902-6746
	
	** /stderr **
	I0814 09:39:03.069444  174943 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:39:03.106608  174943 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-d1c345d3493c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:f5:52:80}}
	I0814 09:39:03.107937  174943 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000a10cb0] misses:0}
	I0814 09:39:03.107986  174943 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:39:03.108002  174943 network_create.go:106] attempt to create docker network old-k8s-version-20210814093902-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:39:03.108057  174943 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210814093902-6746
	I0814 09:39:03.177694  174943 network_create.go:90] docker network old-k8s-version-20210814093902-6746 192.168.58.0/24 created
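The two lines above show how minikube picks the cluster network: 192.168.49.0/24 is skipped because an existing bridge (br-d1c345d3493c) already owns it, the next free /24 is reserved, and only then does docker network create run with an explicit --subnet and --gateway. A minimal Go sketch of that scan; the candidate step through 192.168.x.0/24 is an assumption for illustration, and checking only host interfaces is a simplification (minikube also consults existing docker networks):

	// subnetscan.go: find the first candidate /24 that no local interface overlaps.
	package main

	import (
		"fmt"
		"net"
	)

	func taken(subnet *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative: treat lookup errors as "taken"
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
				return true // a bridge or NIC already lives in this range
			}
		}
		return false
	}

	func main() {
		// Third octets 49, 58, 67, ... match the log (49 tried first, 58 reserved).
		for octet := 49; octet < 255; octet += 9 {
			_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
			if !taken(subnet) {
				fmt.Println("using free private subnet:", subnet)
				return
			}
		}
		fmt.Println("no free subnet found")
	}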
	I0814 09:39:03.177723  174943 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20210814093902-6746" container
	I0814 09:39:03.177793  174943 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:39:03.215811  174943 cli_runner.go:115] Run: docker volume create old-k8s-version-20210814093902-6746 --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:39:03.253142  174943 oci.go:102] Successfully created a docker volume old-k8s-version-20210814093902-6746
	I0814 09:39:03.253222  174943 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210814093902-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --entrypoint /usr/bin/test -v old-k8s-version-20210814093902-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:39:04.056508  174943 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210814093902-6746
	W0814 09:39:04.056559  174943 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:39:04.056594  174943 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:39:04.056630  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:04.056654  174943 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:39:04.056655  174943 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:39:04.056713  174943 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210814093902-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:39:04.139991  174943 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210814093902-6746 --name old-k8s-version-20210814093902-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --network old-k8s-version-20210814093902-6746 --ip 192.168.58.2 --volume old-k8s-version-20210814093902-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
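Note how every --publish flag above binds a container port to 127.0.0.1 with the host port left empty, so docker assigns an ephemeral one; the later "container inspect -f" calls recover those assignments. A small sketch using the same inspect template seen in the log (container name taken from the log; error handling kept minimal):

	// hostport.go: recover the ephemeral host port docker mapped to 22/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"old-k8s-version-20210814093902-6746").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 32923
	}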
	I0814 09:39:04.626452  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Running}}
	I0814 09:39:04.671458  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:04.714884  174943 cli_runner.go:115] Run: docker exec old-k8s-version-20210814093902-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:39:04.854255  174943 oci.go:278] the created container "old-k8s-version-20210814093902-6746" has a running status.
	I0814 09:39:04.854292  174943 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa...
	I0814 09:39:05.034841  174943 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:39:05.415414  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:05.458215  174943 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:39:05.458235  174943 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210814093902-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:39:07.973443  174943 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210814093902-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.916668349s)
	I0814 09:39:07.973474  174943 kic.go:188] duration metric: took 3.916818 seconds to extract preloaded images to volume
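The ~3.9s extraction above relies on a disposable container: the preload tarball is bind-mounted read-only, /usr/bin/tar is the entrypoint, and the output lands in the named volume that becomes the node's /var. A sketch of the same pattern, with the host path shortened for readability (image tag and tar flags are taken from the logged command):

	// preload.go: untar a .tar.lz4 into a docker volume via a throwaway container.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarball := "/path/to/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4" // shortened
		volume := "old-k8s-version-20210814093902-6746"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
			"-v", volume+":/extractDir",        // named volume receives the image store
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Println(err, string(out))
		}
	}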
	I0814 09:39:07.973538  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:08.011255  174943 machine.go:88] provisioning docker machine ...
	I0814 09:39:08.011292  174943 ubuntu.go:169] provisioning hostname "old-k8s-version-20210814093902-6746"
	I0814 09:39:08.011362  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.048765  174943 main.go:130] libmachine: Using SSH client type: native
	I0814 09:39:08.049003  174943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0814 09:39:08.049029  174943 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210814093902-6746 && echo "old-k8s-version-20210814093902-6746" | sudo tee /etc/hostname
	I0814 09:39:08.188179  174943 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210814093902-6746
	
	I0814 09:39:08.188248  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.227743  174943 main.go:130] libmachine: Using SSH client type: native
	I0814 09:39:08.227920  174943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0814 09:39:08.227955  174943 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210814093902-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210814093902-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210814093902-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:39:08.351978  174943 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:39:08.352004  174943 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:39:08.352038  174943 ubuntu.go:177] setting up certificates
	I0814 09:39:08.352048  174943 provision.go:83] configureAuth start
	I0814 09:39:08.352093  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:08.390983  174943 provision.go:138] copyHostCerts
	I0814 09:39:08.391046  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:39:08.391054  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:39:08.391107  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:39:08.391186  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:39:08.391201  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:39:08.391222  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:39:08.391269  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:39:08.391276  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:39:08.391293  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:39:08.391328  174943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210814093902-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210814093902-6746]
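The server certificate above is generated with subject alternative names covering the node IP, loopback, and the hostnames clients may dial. A self-contained sketch of the SAN handling with crypto/x509; it self-signs for brevity, whereas minikube signs against the ca.pem/ca-key.pem pair loaded earlier:

	// sancert.go: issue a cert whose SANs match the san=[...] list in the log.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20210814093902-6746"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: IPs and DNS names the server cert must cover.
			IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20210814093902-6746"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}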
	I0814 09:39:08.503869  174943 provision.go:172] copyRemoteCerts
	I0814 09:39:08.503920  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:39:08.503953  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.542791  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.632044  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:39:08.648011  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0814 09:39:08.663392  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:39:08.678374  174943 provision.go:86] duration metric: configureAuth took 326.31671ms
	I0814 09:39:08.678393  174943 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:39:08.678545  174943 config.go:177] Loaded profile config "old-k8s-version-20210814093902-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0814 09:39:08.678558  174943 machine.go:91] provisioned docker machine in 667.282302ms
	I0814 09:39:08.678567  174943 client.go:171] LocalClient.Create took 5.682731021s
	I0814 09:39:08.678591  174943 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210814093902-6746" took 5.682819008s
	I0814 09:39:08.678603  174943 start.go:267] post-start starting for "old-k8s-version-20210814093902-6746" (driver="docker")
	I0814 09:39:08.678611  174943 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:39:08.678659  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:39:08.678705  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.716852  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.803556  174943 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:39:08.806084  174943 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:39:08.806105  174943 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:39:08.806120  174943 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:39:08.806127  174943 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:39:08.806137  174943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:39:08.806181  174943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:39:08.806271  174943 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:39:08.806379  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:39:08.812467  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:39:08.827995  174943 start.go:270] post-start completed in 149.382379ms
	I0814 09:39:08.828361  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:08.867577  174943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json ...
	I0814 09:39:08.867772  174943 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:39:08.867807  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.905591  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.993182  174943 start.go:129] duration metric: createHost completed in 5.999767131s
	I0814 09:39:08.993204  174943 start.go:80] releasing machines lock for "old-k8s-version-20210814093902-6746", held for 5.999907972s
	I0814 09:39:08.993277  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:09.031752  174943 ssh_runner.go:149] Run: systemctl --version
	I0814 09:39:09.031796  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:09.031857  174943 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:39:09.031938  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:09.070624  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:09.073066  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:09.156561  174943 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:39:09.195165  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:39:09.204055  174943 docker.go:153] disabling docker service ...
	I0814 09:39:09.204109  174943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:39:09.218986  174943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:39:09.227073  174943 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:39:09.292773  174943 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:39:09.350837  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:39:09.359574  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:39:09.371511  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
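The containerd config is shipped as one base64 blob so it survives shell quoting inside the remote command; piping it through "base64 -d" on the node reproduces the TOML. To read it locally, a short decoder (the file name config.b64 is hypothetical; paste the blob from the log into it):

	// decodecfg.go: decode the base64-encoded containerd config.toml.
	package main

	import (
		"encoding/base64"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		raw, err := os.ReadFile("config.b64")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Strip any line wrapping picked up from the report before decoding.
		clean := strings.NewReplacer("\n", "", "\r", "", "\t", "", " ", "").Replace(string(raw))
		toml, err := base64.StdEncoding.DecodeString(clean)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(string(toml))
	}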
	I0814 09:39:09.383254  174943 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:39:09.388925  174943 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:39:09.388993  174943 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:39:09.395292  174943 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:39:09.401003  174943 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:39:09.455625  174943 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:39:09.517464  174943 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:39:09.517527  174943 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:39:09.520948  174943 start.go:413] Will wait 60s for crictl version
	I0814 09:39:09.521000  174943 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:39:09.543774  174943 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:39:09Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:39:20.590544  174943 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:39:20.612712  174943 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:39:20.612770  174943 ssh_runner.go:149] Run: containerd --version
	I0814 09:39:20.634809  174943 ssh_runner.go:149] Run: containerd --version
	I0814 09:39:20.656509  174943 out.go:177] * Preparing Kubernetes v1.14.0 on containerd 1.4.9 ...
	I0814 09:39:20.656580  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:39:20.693435  174943 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:39:20.696553  174943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
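This one-liner makes the host.minikube.internal entry idempotent: filter out any stale line, append the fresh mapping, write a temp file, and copy it into place so /etc/hosts is never seen half-written. The same logic spelled out in Go (pinHost is a hypothetical helper name, not minikube code):

	// pinhost.go: idempotently pin "ip<TAB>name" in a hosts file.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		// Write a temp file first, then rename, so readers never see a torn file.
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.168.58.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}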
	I0814 09:39:20.706069  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:20.706115  174943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:39:20.727361  174943 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:39:20.727378  174943 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:39:20.727410  174943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:39:20.747946  174943 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:39:20.747961  174943 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:39:20.748000  174943 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:39:20.768036  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:20.768056  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:20.768065  174943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:39:20.768078  174943 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210814093902-6746 NodeName:old-k8s-version-20210814093902-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:39:20.768186  174943 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20210814093902-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210814093902-6746
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
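The generated kubeadm.yaml above is four YAML documents in one file, separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Any consumer has to split on the separator before unmarshalling each kind; a tiny sketch of that split:

	// splitdocs.go: split a multi-document YAML stream on "---" separators.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		raw := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
		for i, doc := range strings.Split(raw, "\n---\n") {
			fmt.Printf("document %d:\n%s\n", i, strings.TrimSpace(doc))
		}
	}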
	
	I0814 09:39:20.768270  174943 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20210814093902-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
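The kubelet unit above sets an empty ExecStart= before redefining it: in a systemd drop-in, an empty ExecStart= clears the base unit's command list so the override fully replaces it rather than adding a second ExecStart. A rough Go sketch of generating such a drop-in (the helper and the flag subset are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// kubeletDropIn builds a 10-kubeadm.conf-style drop-in: clear ExecStart,
// then redefine it with the kubelet binary and its flags.
func kubeletDropIn(binary string, flags map[string]string) string {
	var b strings.Builder
	b.WriteString("[Unit]\nWants=containerd.service\n\n[Service]\nExecStart=\nExecStart=" + binary)
	for k, v := range flags { // NOTE: map order is random; real code would sort keys
		fmt.Fprintf(&b, " --%s=%s", k, v)
	}
	b.WriteString("\n\n[Install]\n")
	return b.String()
}

func main() {
	fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.14.0/kubelet",
		map[string]string{"container-runtime": "remote", "node-ip": "192.168.58.2"}))
}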
	I0814 09:39:20.768309  174943 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0814 09:39:20.774391  174943 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:39:20.774447  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:39:20.780415  174943 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (652 bytes)
	I0814 09:39:20.791377  174943 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:39:20.802514  174943 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0814 09:39:20.813779  174943 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:39:20.816320  174943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
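The bash one-liner above makes the hosts entry idempotent: filter out any existing control-plane.minikube.internal line, append the current mapping, and copy the result back over /etc/hosts. The same logic as a Go sketch (illustrative only; the real flow runs the shell command over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "\t<host>" and appends
// the current "<ip>\t<host>" mapping, mirroring the grep/echo pipeline.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // filter the old entry
			kept = append(kept, line)
		}
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, host)
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.58.2",
		"control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}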
	I0814 09:39:20.824300  174943 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746 for IP: 192.168.58.2
	I0814 09:39:20.824348  174943 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:39:20.824369  174943 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:39:20.824424  174943 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key
	I0814 09:39:20.824434  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt with IP's: []
	I0814 09:39:21.018286  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt ...
	I0814 09:39:21.018312  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: {Name:mk4ac4b7e75286e8b79fd45038770d55847165f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.018501  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key ...
	I0814 09:39:21.018513  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key: {Name:mkfcf32e4fa8803fcf65dae722ac7ab1f6cf1297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.018594  174943 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041
	I0814 09:39:21.018604  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:39:21.104538  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 ...
	I0814 09:39:21.104563  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041: {Name:mk874ad2629ebee729aa095bca20e1e9bf8bb4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.104717  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041 ...
	I0814 09:39:21.104730  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041: {Name:mk1727c0c17503796e9734870659a2a374768265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.104817  174943 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt
	I0814 09:39:21.104885  174943 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key
	I0814 09:39:21.104937  174943 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key
	I0814 09:39:21.104946  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt with IP's: []
	I0814 09:39:21.258449  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt ...
	I0814 09:39:21.258494  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt: {Name:mk2f05e5d279ba24cfb466b9815a114a874915fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.258659  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key ...
	I0814 09:39:21.258672  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key: {Name:mk44a042ae30bbae120b4e022a4a2b2eb4d2ee31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
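The apiserver certificate generated above carries IP SANs for the node IP, the service-network VIP, localhost, and 10.0.0.1 (the list logged at crypto.go:69). A compact sketch of minting a certificate with those SANs via crypto/x509; it is self-signed here for brevity, whereas minikube signs with its minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		// The IP SANs from the log above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}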
	I0814 09:39:21.258831  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:39:21.258867  174943 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:39:21.258878  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:39:21.258899  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:39:21.258922  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:39:21.258944  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:39:21.258985  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:39:21.259945  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:39:21.276890  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 09:39:21.316234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:39:21.331459  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:39:21.346701  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:39:21.361781  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:39:21.376654  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:39:21.391554  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:39:21.406234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:39:21.421234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:39:21.436252  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:39:21.451359  174943 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:39:21.462306  174943 ssh_runner.go:149] Run: openssl version
	I0814 09:39:21.466604  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:39:21.472871  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.475534  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.475567  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.479760  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:39:21.486094  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:39:21.492443  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.495062  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.495096  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.499238  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:39:21.505598  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:39:21.511951  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.514698  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.514734  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.518872  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
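The ln -fs commands above wire each PEM into OpenSSL's hashed trust directory: `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941 for minikubeCA.pem), and /etc/ssl/certs/<hash>.0 must point at the certificate for OpenSSL to find it by hash lookup. A sketch of the same wiring (the helper is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a PEM and links
// /etc/ssl/certs/<hash>.0 to it, mirroring `ln -fs` in the log above.
func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // replace any stale link, as -f does
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}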
	I0814 09:39:21.525190  174943 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:39:21.525270  174943 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:39:21.525302  174943 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:39:21.549326  174943 cri.go:76] found id: ""
	I0814 09:39:21.549383  174943 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:39:21.555843  174943 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:39:21.562038  174943 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:39:21.562091  174943 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:39:21.567941  174943 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
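	The failed ls above is the stale-config probe: if none of the kubeconfigs a previous kubeadm run would have left behind exist, cleanup is skipped and init proceeds on a fresh node. A sketch of the probe:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The four files checked by the `sudo ls -la` command above.
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	stale := false
	for _, c := range confs {
		if _, err := os.Stat(c); err == nil {
			stale = true // a previous cluster left config behind
		}
	}
	fmt.Println("stale config present:", stale)
}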
	I0814 09:39:21.567973  174943 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:39:21.866263  174943 out.go:204]   - Generating certificates and keys ...
	I0814 09:39:23.763238  174943 out.go:204]   - Booting up control plane ...
	I0814 09:39:33.306959  174943 out.go:204]   - Configuring RBAC rules ...
	I0814 09:39:33.721817  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:33.721846  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:33.723323  174943 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:39:33.723409  174943 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:39:33.726795  174943 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0814 09:39:33.726810  174943 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:39:33.738760  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:39:34.040132  174943 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:39:34.040255  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.040255  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=old-k8s-version-20210814093902-6746 minikube.k8s.io/updated_at=2021_08_14T09_39_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.055567  174943 ops.go:34] apiserver oom_adj: 16
	I0814 09:39:34.055587  174943 ops.go:39] adjusting apiserver oom_adj to -10
	I0814 09:39:34.055600  174943 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
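	ops.go above reads the apiserver's oom_adj (16) and lowers it to -10 so the kernel OOM killer picks it last under memory pressure; the dmesg section further down even notes oom_adj is deprecated in favor of oom_score_adj, though the legacy file still works. A sketch of the adjustment (needs root; the real flow resolves the PID with `pgrep kube-apiserver`):

package main

import (
	"fmt"
	"os"
	"strings"
)

// adjustOOM reads /proc/<pid>/oom_adj and writes the new value.
// Lowering it below 0 requires root (CAP_SYS_RESOURCE).
func adjustOOM(pid, val int) error {
	path := fmt.Sprintf("/proc/%d/oom_adj", pid)
	cur, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	fmt.Printf("current oom_adj: %s\n", strings.TrimSpace(string(cur)))
	return os.WriteFile(path, []byte(fmt.Sprintf("%d\n", val)), 0644)
}

func main() {
	// We use our own PID purely for demonstration.
	if err := adjustOOM(os.Getpid(), -10); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}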
	I0814 09:39:34.148540  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.747637  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:35.247207  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:35.747299  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:36.247652  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:36.747784  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:38.872980  174943 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.125158429s)
	I0814 09:39:39.748032  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
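	The repeated `kubectl get sa default` runs above are a roughly 500ms poll waiting for kube-controller-manager to create the default ServiceAccount. The retry shape, sketched (the real loop runs the command over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor retries a command on a fixed interval until it exits zero or
// the deadline passes.
func waitFor(cmd []string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command(cmd[0], cmd[1:]...).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %s waiting for %v", timeout, cmd)
}

func main() {
	err := waitFor([]string{"kubectl", "get", "sa", "default"},
		500*time.Millisecond, 2*time.Minute)
	fmt.Println(err)
}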
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a747d02c26253       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       0                   0eed254b3316c
	ef9cd508c4bcf       296a6d5035e2d       2 minutes ago       Running             coredns                   0                   79704c1ba1377
	9753722af7745       6de166512aa22       3 minutes ago       Running             kindnet-cni               0                   e9f1ed022aae0
	66b515b3e4a14       adb2816ea823a       3 minutes ago       Running             kube-proxy                0                   63ba7b0ef4459
	0fcd2105780a3       bc2bb319a7038       3 minutes ago       Running             kube-controller-manager   0                   74d460f2e7a7f
	8bcc07d573eb1       0369cf4303ffd       3 minutes ago       Running             etcd                      0                   60a80199b4a57
	d3bf648d26067       6be0dc1302e30       3 minutes ago       Running             kube-scheduler            0                   faadff72e3a9c
	ab29adb23277d       3d174f00aa39e       3 minutes ago       Running             kube-apiserver            0                   7b9c957209d40
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:39:47 UTC. --
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.541959125Z" level=info msg="Connect containerd service"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542013371Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542632511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542708189Z" level=info msg="Start subscribing containerd event"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542770137Z" level=info msg="Start recovering state"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542857642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542918689Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542967809Z" level=info msg="containerd successfully booted in 0.040983s"
	Aug 14 09:36:58 pause-20210814093545-6746 systemd[1]: Started containerd container runtime.
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625754612Z" level=info msg="Start event monitor"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625793205Z" level=info msg="Start snapshots syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625802136Z" level=info msg="Start cni network conf syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625807599Z" level=info msg="Start streaming server"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.008705018Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.026044573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 pid=2510
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.163827167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.166278582Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.228933904Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.229330725Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.371077991Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\" returns successfully"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449187715Z" level=info msg="Finish piping stderr of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449222888Z" level=info msg="Finish piping stdout of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.450533507Z" level=info msg="TaskExit event &TaskExit{ContainerID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,Pid:2562,ExitStatus:255,ExitedAt:2021-08-14 09:37:32.450264852 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501408095Z" level=info msg="shim disconnected" id=a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501502681Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug14 09:29] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth38d0eb85
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a bd 7c 39 49 62 08 06        ........|9Ib..
	[Aug14 09:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:32] cgroup: cgroup2: unknown option "nsdelegate"
	[ +13.411048] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.035402] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:33] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.451942] cgroup: cgroup2: unknown option "nsdelegate"
	[ +14.641136] tee (136175): /proc/134359/oom_adj is deprecated, please use /proc/134359/oom_score_adj instead.
	[Aug14 09:34] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.573195] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe29e5784
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 4c 1a e2 69 4b 08 06        .......L..iK..
	[  +8.954711] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:35] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth529d8992
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 22 4f ef 2e 27 f0 08 06        ......"O..'...
	[  +9.430011] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:36] cgroup: cgroup2: unknown option "nsdelegate"
	[ +36.823390] cgroup: cgroup2: unknown option "nsdelegate"
	[ +15.237179] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth43e4fc69
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 7b 35 3d 7d 88 08 06        .......{5=}...
	[Aug14 09:37] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:38] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd8221cd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 44 cc a6 70 5e 08 06        .......D..p^..
	
	* 
	* ==> etcd [8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed] <==
	* 2021-08-14 09:36:15.337412 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-14 09:36:15.337510 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-14 09:36:15.338236 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:36:15.338273 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-14 09:36:29.373341 W | etcdserver: request "header:<ID:8128006959566550151 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:70cc7b4404ddf486>" with result "size:42" took too long (722.758009ms) to execute
	2021-08-14 09:36:29.375095 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210814093545-6746\" " with result "range_response_count:1 size:3970" took too long (1.316088805s) to execute
	2021-08-14 09:36:35.339357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:36:40.102148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:36:50.101987 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:00.102036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:10.102324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:37:10.796456 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (540.033169ms) to execute
	2021-08-14 09:37:10.796484 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (227.671703ms) to execute
	2021-08-14 09:37:10.796572 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (157.24978ms) to execute
	2021-08-14 09:37:13.568821 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000092497s) to execute
	2021-08-14 09:37:13.954357 W | wal: sync duration of 3.152713958s, expected less than 1s
	2021-08-14 09:37:13.955117 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:37466" took too long (3.15117799s) to execute
	2021-08-14 09:37:15.221449 W | wal: sync duration of 1.25762747s, expected less than 1s
	2021-08-14 09:37:16.767289 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3959" took too long (2.77113478s) to execute
	2021-08-14 09:37:16.767332 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (2.810616713s) to execute
	2021-08-14 09:37:16.767593 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (3.188957483s) to execute
	2021-08-14 09:37:16.767688 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.180831329s) to execute
	2021-08-14 09:37:16.767919 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-pause-20210814093545-6746.169b22b0f0946975\" " with result "range_response_count:1 size:863" took too long (1.180140558s) to execute
	2021-08-14 09:37:16.768090 W | etcdserver: request "header:<ID:8128006959566550730 > lease_revoke:<id:70cc7b4404ddf6a8>" with result "size:29" took too long (854.719058ms) to execute
	2021-08-14 09:37:16.768297 W | etcdserver: read-only range request "key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true " with result "range_response_count:0 size:5" took too long (837.263392ms) to execute
	
	* 
	* ==> kernel <==
	*  09:40:48 up  1:23,  0 users,  load average: 0.80, 2.09, 1.74
	Linux pause-20210814093545-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086] <==
	* W0814 09:40:44.892491       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:44.963204       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.034803       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.089475       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.119628       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.224345       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.238875       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.406889       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.411347       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.524407       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:45.572152       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:46.076530       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:46.194960       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:46.531709       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:46.613982       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:46.784982       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:47.107086       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0814 09:40:47.945729       1 trace.go:205] Trace[165450648]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:39:47.945) (total time: 60000ms):
	Trace[165450648]: [1m0.000281407s] [1m0.000281407s] END
	E0814 09:40:47.945758       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:40:47.945812       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:40:47.947224       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:40:47.948315       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:40:47.949813       1 trace.go:205] Trace[87157100]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:39:47.945) (total time: 60004ms):
	Trace[87157100]: [1m0.004385913s] [1m0.004385913s] END
	
	* 
	* ==> kube-controller-manager [0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216] <==
	* I0814 09:36:35.558636       1 shared_informer.go:247] Caches are synced for cronjob 
	I0814 09:36:35.594056       1 shared_informer.go:247] Caches are synced for disruption 
	I0814 09:36:35.594080       1 disruption.go:371] Sending events to api server.
	I0814 09:36:35.618269       1 shared_informer.go:247] Caches are synced for attach detach 
	I0814 09:36:35.626411       1 shared_informer.go:247] Caches are synced for PV protection 
	I0814 09:36:35.658052       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0814 09:36:35.658104       1 shared_informer.go:247] Caches are synced for expand 
	I0814 09:36:35.666214       1 shared_informer.go:247] Caches are synced for endpoint 
	I0814 09:36:35.666883       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tbw9g"
	I0814 09:36:35.670094       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zgc2h"
	I0814 09:36:35.736985       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0814 09:36:35.758656       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0814 09:36:35.758886       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0814 09:36:35.767757       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.807894       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0814 09:36:35.810106       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.813582       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0814 09:36:36.206921       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.206946       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:36:36.237226       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.328706       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0814 09:36:36.413536       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:36.418045       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7njgj"
	I0814 09:36:36.433569       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:50.510455       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97] <==
	* I0814 09:36:37.040930       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:36:37.040978       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:36:37.041009       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:36:37.135620       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:36:37.135697       1 server_others.go:212] Using iptables Proxier.
	I0814 09:36:37.135734       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:36:37.135764       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:36:37.136453       1 server.go:643] Version: v1.21.3
	I0814 09:36:37.137196       1 config.go:315] Starting service config controller
	I0814 09:36:37.138165       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:36:37.139739       1 config.go:224] Starting endpoint slice config controller
	I0814 09:36:37.139765       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:36:37.141550       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:36:37.142664       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:36:37.240414       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:36:37.240445       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119] <==
	* I0814 09:36:19.239734       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.239780       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.240100       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:36:19.240128       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:36:19.310200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:19.310391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:36:19.310488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.310570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:36:19.310642       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:36:19.310718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:36:19.310788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:19.310860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:36:19.310940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:19.311017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:36:19.312650       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:36:20.163865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.194900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:20.263532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.308670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:20.382950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:20.414192       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0814 09:36:23.340697       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:40:48 UTC. --
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827448    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35667363-ef4b-4333-af82-ae0a5645f03c-xtables-lock\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827473    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b76115f-19df-4554-87f1-b88734b7e601-xtables-lock\") pod \"kube-proxy-zgc2h\" (UID: \"2b76115f-19df-4554-87f1-b88734b7e601\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:35.827529    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtcs\" (UniqueName: \"kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") "
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933751    1268 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933781    1268 projected.go:199] Error preparing data for projected volume kube-api-access-99zbk for pod kube-system/kube-proxy-zgc2h: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.933854    1268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/2b76115f-19df-4554-87f1-b88734b7e601-kube-api-access-99zbk podName:2b76115f-19df-4554-87f1-b88734b7e601 nodeName:}" failed. No retries permitted until 2021-08-14 09:36:36.43382935 +0000 UTC m=+14.220654883 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-99zbk\" (UniqueName: \"kubernetes.io/projected/2b76115f-19df-4554-87f1-b88734b7e601-kube-api-access-99zbk\") pod \"kube-proxy-zgc2h\" (UID: \"2b76115f-19df-4554-87f1-b88734b7e601\") : configmap \"kube-root-ca.crt\" not found"
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934499    1268 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934531    1268 projected.go:199] Error preparing data for projected volume kube-api-access-kqtcs for pod kube-system/kindnet-tbw9g: configmap "kube-root-ca.crt" not found
	Aug 14 09:36:35 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:35.934605    1268 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs podName:35667363-ef4b-4333-af82-ae0a5645f03c nodeName:}" failed. No retries permitted until 2021-08-14 09:36:36.434579072 +0000 UTC m=+14.221404600 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-kqtcs\" (UniqueName: \"kubernetes.io/projected/35667363-ef4b-4333-af82-ae0a5645f03c-kube-api-access-kqtcs\") pod \"kindnet-tbw9g\" (UID: \"35667363-ef4b-4333-af82-ae0a5645f03c\") : configmap \"kube-root-ca.crt\" not found"
	Aug 14 09:36:37 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:37.880472    1268 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.578728    1268 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.761755    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4-config-volume\") pod \"coredns-558bd4d5db-7njgj\" (UID: \"5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4\") "
	Aug 14 09:36:53 pause-20210814093545-6746 kubelet[1268]: I0814 09:36:53.761812    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j45c\" (UniqueName: \"kubernetes.io/projected/5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4-kube-api-access-8j45c\") pod \"coredns-558bd4d5db-7njgj\" (UID: \"5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4\") "
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: W0814 09:36:58.476551    1268 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: W0814 09:36:58.476585    1268 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856585    1268 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856636    1268 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:36:58 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:58.856654    1268 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:36:59 pause-20210814093545-6746 kubelet[1268]: E0814 09:36:59.441448    1268 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.405402    1268 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.604028    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2xz\" (UniqueName: \"kubernetes.io/projected/80eca970-b4ab-4ac8-af20-f814411672fb-kube-api-access-tm2xz\") pod \"storage-provisioner\" (UID: \"80eca970-b4ab-4ac8-af20-f814411672fb\") "
	Aug 14 09:37:17 pause-20210814093545-6746 kubelet[1268]: I0814 09:37:17.604122    1268 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/80eca970-b4ab-4ac8-af20-f814411672fb-tmp\") pod \"storage-provisioner\" (UID: \"80eca970-b4ab-4ac8-af20-f814411672fb\") "
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:37:18 pause-20210814093545-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 154 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00013b210, 0xc000000002)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00013b200)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00052a720, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000140f00, 0x18e5530, 0xc00051d0c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000362440)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000362440, 0x18b3d60, 0xc000708690, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000362440, 0x3b9aca00, 0x0, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000362440, 0x3b9aca00, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:40:47.949679  178483 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (92.68s)
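Note: the kubelet entries above show the condition underlying this failure: once the node is paused, kubelet can no longer dial /run/containerd/containerd.sock, so every CRI call (ListPodSandbox, Version, GenericPLEG) returns Unavailable and the apiserver eventually stops answering, which is why the later "describe nodes" call times out. A minimal standalone probe for that condition is sketched below; the socket path is taken from the log, and the program is illustrative only, not part of minikube or the test suite.

// Hypothetical probe for the condition the kubelet log reports above:
// dial the containerd socket to see whether the runtime is reachable.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 2*time.Second)
	if err != nil {
		// On a paused node this fails the same way as the log:
		// "connect: no such file or directory".
		fmt.Println("containerd unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is reachable")
}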

                                                
                                    
TestPause/serial/PauseAgain (14.76s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210814093545-6746 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210814093545-6746 --alsologtostderr -v=5: exit status 80 (5.498938174s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210814093545-6746 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 09:40:49.001729  180884 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:40:49.001847  180884 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:40:49.001860  180884 out.go:311] Setting ErrFile to fd 2...
	I0814 09:40:49.001865  180884 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:40:49.002012  180884 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:40:49.002248  180884 out.go:305] Setting JSON to false
	I0814 09:40:49.002270  180884 mustload.go:65] Loading cluster: pause-20210814093545-6746
	I0814 09:40:49.002661  180884 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:40:49.003203  180884 cli_runner.go:115] Run: docker container inspect pause-20210814093545-6746 --format={{.State.Status}}
	I0814 09:40:49.045934  180884 host.go:66] Checking if "pause-20210814093545-6746" exists ...
	I0814 09:40:49.046885  180884 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210814093545-6746 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0814 09:40:49.048992  180884 out.go:177] * Pausing node pause-20210814093545-6746 ... 
	I0814 09:40:49.049024  180884 host.go:66] Checking if "pause-20210814093545-6746" exists ...
	I0814 09:40:49.049273  180884 ssh_runner.go:149] Run: systemctl --version
	I0814 09:40:49.049314  180884 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210814093545-6746
	I0814 09:40:49.087889  180884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/pause-20210814093545-6746/id_rsa Username:docker}
	I0814 09:40:49.180577  180884 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:40:49.189675  180884 pause.go:50] kubelet running: true
	I0814 09:40:49.189732  180884 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:40:54.312527  180884 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.122772816s)
	I0814 09:40:54.312573  180884 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:40:54.312633  180884 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:40:54.379668  180884 cri.go:76] found id: "a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564"
	I0814 09:40:54.379699  180884 cri.go:76] found id: "ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431"
	I0814 09:40:54.379706  180884 cri.go:76] found id: "9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38"
	I0814 09:40:54.379710  180884 cri.go:76] found id: "66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97"
	I0814 09:40:54.379714  180884 cri.go:76] found id: "0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216"
	I0814 09:40:54.379718  180884 cri.go:76] found id: "8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed"
	I0814 09:40:54.379721  180884 cri.go:76] found id: "d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119"
	I0814 09:40:54.379725  180884 cri.go:76] found id: "ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086"
	I0814 09:40:54.379729  180884 cri.go:76] found id: ""
	I0814 09:40:54.379775  180884 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:40:54.411572  180884 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","pid":2531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22/rootfs","created":"2021-08-14T09:37:18.1369696Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_80eca970-b4ab-4ac8-af20-f814411672fb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216","pid":1158,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d
0a6aac05dea8ef621aa6bb0216","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216/rootfs","created":"2021-08-14T09:36:14.761009913Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","pid":1016,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171/rootfs","created":"2021-08-14T09:36:14.461002641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff4
7a7f5fc45d171","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210814093545-6746_f11ebb5af93764eea1676b8a16cd11fe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","pid":1593,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7/rootfs","created":"2021-08-14T09:36:36.720976391Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zgc2h_2b76115f-19df-4554-87f1-b88734b7e601"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","pid":1642,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97/rootfs","created":"2021-08-14T09:36:36.901095693Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","pid":1017,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6/rootfs","created":"2021-08-14T09:36:14.461001587Z","annotations":{"io.kubernetes.cri.container-type":
"sandbox","io.kubernetes.cri.sandbox-id":"74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210814093545-6746_f38a3f341ed6042c55d7f17229a2a5a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","pid":1928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca/rootfs","created":"2021-08-14T09:36:54.052967447Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-7njgj_5ea798ce-e21f-4e7a-a7fb-3c3c24f091c4"},"owner":"root"},{"o
ciVersion":"1.0.2-dev","id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","pid":1005,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3/rootfs","created":"2021-08-14T09:36:14.436983353Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210814093545-6746_def6f5caa1dfaea021514c05e476f85c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","pid":1157,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed","rootfs":"/run/
containerd/io.containerd.runtime.v2.task/k8s.io/8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed/rootfs","created":"2021-08-14T09:36:14.760946456Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","pid":1778,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38/rootfs","created":"2021-08-14T09:36:37.404999146Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b
20"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","pid":1109,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086/rootfs","created":"2021-08-14T09:36:14.653021099Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","pid":1116,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3bf648d2606793756
e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119/rootfs","created":"2021-08-14T09:36:14.708999492Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","pid":1601,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20/rootfs","created":"2021-08-14T09:36:37.001965799Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-tbw9g_35667363-ef4b-4333-a
f82-ae0a5645f03c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","pid":1960,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431/rootfs","created":"2021-08-14T09:36:54.260969809Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","pid":1012,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/faadff72e3a
9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd/rootfs","created":"2021-08-14T09:36:14.460995721Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210814093545-6746_a45cdcfbe723180b68e8cf5ee8920aa4"},"owner":"root"}]
	I0814 09:40:54.411831  180884 cri.go:113] list returned 15 containers
	I0814 09:40:54.411846  180884 cri.go:116] container: {ID:0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 Status:running}
	I0814 09:40:54.411859  180884 cri.go:118] skipping 0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 - not in ps
	I0814 09:40:54.411867  180884 cri.go:116] container: {ID:0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 Status:running}
	I0814 09:40:54.411878  180884 cri.go:116] container: {ID:60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 Status:running}
	I0814 09:40:54.411887  180884 cri.go:118] skipping 60a80199b4a57bf9ef5a816ffaa2c9a35a0fa4252affede8ff47a7f5fc45d171 - not in ps
	I0814 09:40:54.411896  180884 cri.go:116] container: {ID:63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 Status:running}
	I0814 09:40:54.411905  180884 cri.go:118] skipping 63ba7b0ef4459f7e97061a1b10cecb8086818e3d940a4f1f0106c5d5591c20e7 - not in ps
	I0814 09:40:54.411910  180884 cri.go:116] container: {ID:66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97 Status:running}
	I0814 09:40:54.411919  180884 cri.go:116] container: {ID:74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 Status:running}
	I0814 09:40:54.411928  180884 cri.go:118] skipping 74d460f2e7a7f32ec04c903ddb75e6fe99348d24ee0febdbd696e78e60b93bb6 - not in ps
	I0814 09:40:54.411932  180884 cri.go:116] container: {ID:79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca Status:running}
	I0814 09:40:54.411943  180884 cri.go:118] skipping 79704c1ba13773617b39655660218def22f6ad9809cf01b72df4ad02bb04deca - not in ps
	I0814 09:40:54.411952  180884 cri.go:116] container: {ID:7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 Status:running}
	I0814 09:40:54.411959  180884 cri.go:118] skipping 7b9c957209d40da6c7c5cb3bfb80a5901c5172b63623a8b4076edd88b3c46ac3 - not in ps
	I0814 09:40:54.411968  180884 cri.go:116] container: {ID:8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed Status:running}
	I0814 09:40:54.411978  180884 cri.go:116] container: {ID:9753722af7745d71bc4071b83e3ae315dc99214efdeb714ab3c08565d9934c38 Status:running}
	I0814 09:40:54.411986  180884 cri.go:116] container: {ID:ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086 Status:running}
	I0814 09:40:54.411996  180884 cri.go:116] container: {ID:d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119 Status:running}
	I0814 09:40:54.412004  180884 cri.go:116] container: {ID:e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 Status:running}
	I0814 09:40:54.412014  180884 cri.go:118] skipping e9f1ed022aae02b70335f8569940160db69d16e522fecace0a3c80e168411b20 - not in ps
	I0814 09:40:54.412023  180884 cri.go:116] container: {ID:ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431 Status:running}
	I0814 09:40:54.412031  180884 cri.go:116] container: {ID:faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd Status:running}
	I0814 09:40:54.412041  180884 cri.go:118] skipping faadff72e3a9cbc537785816ad806d9f7d3190a961cbe21b1b5e472bbb527ddd - not in ps
	I0814 09:40:54.412085  180884 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216
	I0814 09:40:54.425786  180884 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97
	I0814 09:40:54.440329  180884 out.go:177] 
	W0814 09:40:54.440460  180884 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:40:54Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216 66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:40:54Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0814 09:40:54.440473  180884 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0814 09:40:54.442973  180884 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0814 09:40:54.445374  180884 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210814093545-6746 --alsologtostderr -v=5" : exit status 80
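Note: the stderr above pinpoints the bug: minikube batched two container IDs into a single "sudo runc --root /run/containerd/runc/k8s.io pause ..." invocation, but runc's pause subcommand accepts exactly one container ID ("pause" requires exactly 1 argument(s)), so the command exits with status 1 and the pause aborts with GUEST_PAUSE. The sketch below shows the obvious shape of a fix, one runc invocation per container; pauseContainers is an illustrative helper under that assumption, not minikube's actual code.

// Hypothetical sketch: pause CRI containers one runc invocation at a time,
// since `runc pause` takes exactly one container ID per call.
package main

import (
	"fmt"
	"os/exec"
)

func pauseContainers(root string, ids []string) error {
	for _, id := range ids {
		// One invocation per container instead of batching IDs.
		cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// IDs copied from the failing command in the log above.
	ids := []string{
		"0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216",
		"66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97",
	}
	if err := pauseContainers("/run/containerd/runc/k8s.io", ids); err != nil {
		fmt.Println(err)
	}
}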
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210814093545-6746
helpers_test.go:236: (dbg) docker inspect pause-20210814093545-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c",
	        "Created": "2021-08-14T09:35:47.328510764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:35:47.788540698Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hostname",
	        "HostsPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hosts",
	        "LogPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c-json.log",
	        "Name": "/pause-20210814093545-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210814093545-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210814093545-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210814093545-6746",
	                "Source": "/var/lib/docker/volumes/pause-20210814093545-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210814093545-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "name.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ccc2af153ef9d917059bc8c4f07b140ac515f4a831ba1bf6c90b0246a3c1997",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1ccc2af153ef",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210814093545-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "348c3cd444d9"
	                    ],
	                    "NetworkID": "d1c345d3493c76f3a399eb72a44a3805f583371e015cb9c75f513d1b9430742c",
	                    "EndpointID": "7f43385a3cfb69f0364951734129c7173a9f54c3b30297f57443926db80f5d72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746: exit status 2 (307.535583ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25: (6.726906271s)
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                       | insufficient-storage-20210814093219-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:26 UTC | Sat, 14 Aug 2021 09:32:32 UTC |
	|         | insufficient-storage-20210814093219-6746 |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:33:38 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:38 UTC | Sat, 14 Aug 2021 09:33:59 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:34:08 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | offline-containerd-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:34:08 UTC | Sat, 14 Aug 2021 09:34:11 UTC |
	|         | offline-containerd-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:59 UTC | Sat, 14 Aug 2021 09:35:00 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| start   | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:01 UTC | Sat, 14 Aug 2021 09:35:42 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	|         | --memory=2200                            |                                          |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0        |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| delete  | -p                                       | kubernetes-upgrade-20210814093232-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:35:45 UTC |
	|         | kubernetes-upgrade-20210814093232-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:36:31 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| delete  | -p                                       | missing-upgrade-20210814093411-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:31 UTC | Sat, 14 Aug 2021 09:36:34 UTC |
	|         | missing-upgrade-20210814093411-6746      |                                          |         |         |                               |                               |
	| delete  | -p kubenet-20210814093634-6746           | kubenet-20210814093634-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:34 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p flannel-20210814093635-6746           | flannel-20210814093635-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p false-20210814093635-6746             | false-20210814093635-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:36 UTC |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:45 UTC | Sat, 14 Aug 2021 09:36:56 UTC |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --install-addons=false                   |                                          |         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:56 UTC | Sat, 14 Aug 2021 09:37:18 UTC |
	|         | --alsologtostderr                        |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:36 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |         |         |                               |                               |
	|         |  --container-runtime=containerd          |                                          |         |         |                               |                               |
	| -p      | force-systemd-flag-20210814093636-6746   | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210814093636-6746   | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:28 UTC |
	|         | force-systemd-flag-20210814093636-6746   |                                          |         |         |                               |                               |
	| start   | -p                                       | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:28 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | force-systemd-env-20210814093728-6746    |                                          |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr          |                                          |         |         |                               |                               |
	|         | -v=5 --driver=docker                     |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | force-systemd-env-20210814093728-6746    | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | ssh cat /etc/containerd/config.toml      |                                          |         |         |                               |                               |
	| delete  | -p                                       | force-systemd-env-20210814093728-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:15 UTC |
	|         | force-systemd-env-20210814093728-6746    |                                          |         |         |                               |                               |
	| start   | -p                                       | cert-options-20210814093815-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:15 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | cert-options-20210814093815-6746         |                                          |         |         |                               |                               |
	|         | --memory=2048                            |                                          |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                |                                          |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15            |                                          |         |         |                               |                               |
	|         | --apiserver-names=localhost              |                                          |         |         |                               |                               |
	|         | --apiserver-names=www.google.com         |                                          |         |         |                               |                               |
	|         | --apiserver-port=8555                    |                                          |         |         |                               |                               |
	|         | --driver=docker                          |                                          |         |         |                               |                               |
	|         | --container-runtime=containerd           |                                          |         |         |                               |                               |
	| -p      | cert-options-20210814093815-6746         | cert-options-20210814093815-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | ssh openssl x509 -text -noout -in        |                                          |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt    |                                          |         |         |                               |                               |
	| delete  | -p                                       | cert-options-20210814093815-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:39:02 UTC |
	|         | cert-options-20210814093815-6746         |                                          |         |         |                               |                               |
	| unpause | -p pause-20210814093545-6746             | pause-20210814093545-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:48 UTC | Sat, 14 Aug 2021 09:40:48 UTC |
	|         | --alsologtostderr -v=5                   |                                          |         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
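Each audit row above is one minikube invocation, with its arguments wrapped across several table lines. Reassembled from the cert-options rows, for example, the command that was actually run reads as follows (a reconstruction of the table entry, not a new invocation):

	out/minikube-linux-amd64 start -p cert-options-20210814093815-6746 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=docker --container-runtime=containerd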
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:39:02
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:39:02.663117  174943 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:39:02.663193  174943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:39:02.663215  174943 out.go:311] Setting ErrFile to fd 2...
	I0814 09:39:02.663219  174943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:39:02.663297  174943 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:39:02.663528  174943 out.go:305] Setting JSON to false
	I0814 09:39:02.698199  174943 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4905,"bootTime":1628929038,"procs":253,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:39:02.698265  174943 start.go:121] virtualization: kvm guest
	I0814 09:39:02.701044  174943 out.go:177] * [old-k8s-version-20210814093902-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:39:02.702691  174943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:39:02.701175  174943 notify.go:169] Checking for updates...
	I0814 09:39:02.704326  174943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:39:02.705770  174943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:39:02.707156  174943 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:39:02.707588  174943 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:39:02.707661  174943 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:39:02.707716  174943 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:39:02.707743  174943 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:39:02.757772  174943 docker.go:132] docker version: linux-19.03.15
	I0814 09:39:02.757846  174943 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:39:02.834009  174943 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:39:02.792005104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:39:02.834125  174943 docker.go:244] overlay module found
	I0814 09:39:02.836097  174943 out.go:177] * Using the docker driver based on user configuration
	I0814 09:39:02.836120  174943 start.go:278] selected driver: docker
	I0814 09:39:02.836125  174943 start.go:751] validating driver "docker" against <nil>
	I0814 09:39:02.836141  174943 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:39:02.836197  174943 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:39:02.836214  174943 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:39:02.837730  174943 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:39:02.838481  174943 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:39:02.915948  174943 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:39:02.872598918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:39:02.916078  174943 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:39:02.916214  174943 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:39:02.916233  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:02.916238  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:02.916244  174943 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:39:02.916251  174943 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:39:02.916255  174943 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:39:02.916263  174943 start_flags.go:277] config:
	{Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:39:02.918359  174943 out.go:177] * Starting control plane node old-k8s-version-20210814093902-6746 in cluster old-k8s-version-20210814093902-6746
	I0814 09:39:02.918400  174943 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:39:02.919787  174943 out.go:177] * Pulling base image ...
	I0814 09:39:02.919820  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:02.919847  174943 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:39:02.919872  174943 cache.go:56] Caching tarball of preloaded images
	I0814 09:39:02.919916  174943 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:39:02.920015  174943 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:39:02.920031  174943 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on containerd
	I0814 09:39:02.920134  174943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json ...
	I0814 09:39:02.920153  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json: {Name:mk04a532e2ac4420a6fb8880a4de59459858f2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:02.993067  174943 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:39:02.993094  174943 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:39:02.993112  174943 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:39:02.993157  174943 start.go:313] acquiring machines lock for old-k8s-version-20210814093902-6746: {Name:mk8e2fe5e854673f5d1990fabb56ddd331c139dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:39:02.993284  174943 start.go:317] acquired machines lock for "old-k8s-version-20210814093902-6746" in 106.703µs
	I0814 09:39:02.993313  174943 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0814 09:39:02.993400  174943 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:39:02.995534  174943 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0814 09:39:02.995772  174943 start.go:160] libmachine.API.Create for "old-k8s-version-20210814093902-6746" (driver="docker")
	I0814 09:39:02.995827  174943 client.go:168] LocalClient.Create starting
	I0814 09:39:02.995889  174943 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:39:02.995931  174943 main.go:130] libmachine: Decoding PEM data...
	I0814 09:39:02.995957  174943 main.go:130] libmachine: Parsing certificate...
	I0814 09:39:02.996078  174943 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:39:02.996106  174943 main.go:130] libmachine: Decoding PEM data...
	I0814 09:39:02.996128  174943 main.go:130] libmachine: Parsing certificate...
	I0814 09:39:02.996510  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:39:03.033171  174943 cli_runner.go:162] docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:39:03.033249  174943 network_create.go:255] running [docker network inspect old-k8s-version-20210814093902-6746] to gather additional debugging logs...
	I0814 09:39:03.033270  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746
	W0814 09:39:03.069356  174943 cli_runner.go:162] docker network inspect old-k8s-version-20210814093902-6746 returned with exit code 1
	I0814 09:39:03.069384  174943 network_create.go:258] error running [docker network inspect old-k8s-version-20210814093902-6746]: docker network inspect old-k8s-version-20210814093902-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20210814093902-6746
	I0814 09:39:03.069405  174943 network_create.go:260] output of [docker network inspect old-k8s-version-20210814093902-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20210814093902-6746
	
	** /stderr **
	I0814 09:39:03.069444  174943 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:39:03.106608  174943 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-d1c345d3493c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:f5:52:80}}
	I0814 09:39:03.107937  174943 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000a10cb0] misses:0}
	I0814 09:39:03.107986  174943 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:39:03.108002  174943 network_create.go:106] attempt to create docker network old-k8s-version-20210814093902-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:39:03.108057  174943 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210814093902-6746
	I0814 09:39:03.177694  174943 network_create.go:90] docker network old-k8s-version-20210814093902-6746 192.168.58.0/24 created
	I0814 09:39:03.177723  174943 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20210814093902-6746" container
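	The subnet scan above skipped 192.168.49.0/24 (taken by an earlier profile), reserved 192.168.58.0/24, and derived the node's static IP 192.168.58.2 from it. While the profile exists, the chosen subnet and gateway can be read back by hand, e.g. (a sketch using docker's own inspect templates):
	docker network inspect old-k8s-version-20210814093902-6746 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gateway {{(index .IPAM.Config 0).Gateway}}'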
	I0814 09:39:03.177793  174943 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:39:03.215811  174943 cli_runner.go:115] Run: docker volume create old-k8s-version-20210814093902-6746 --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:39:03.253142  174943 oci.go:102] Successfully created a docker volume old-k8s-version-20210814093902-6746
	I0814 09:39:03.253222  174943 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210814093902-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --entrypoint /usr/bin/test -v old-k8s-version-20210814093902-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:39:04.056508  174943 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210814093902-6746
	W0814 09:39:04.056559  174943 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:39:04.056594  174943 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:39:04.056630  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:04.056654  174943 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:39:04.056655  174943 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:39:04.056713  174943 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210814093902-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:39:04.139991  174943 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210814093902-6746 --name old-k8s-version-20210814093902-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --network old-k8s-version-20210814093902-6746 --ip 192.168.58.2 --volume old-k8s-version-20210814093902-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:39:04.626452  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Running}}
	I0814 09:39:04.671458  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:04.714884  174943 cli_runner.go:115] Run: docker exec old-k8s-version-20210814093902-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:39:04.854255  174943 oci.go:278] the created container "old-k8s-version-20210814093902-6746" has a running status.
	I0814 09:39:04.854292  174943 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa...
	I0814 09:39:05.034841  174943 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:39:05.415414  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:05.458215  174943 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:39:05.458235  174943 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210814093902-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:39:07.973443  174943 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210814093902-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.916668349s)
	I0814 09:39:07.973474  174943 kic.go:188] duration metric: took 3.916818 seconds to extract preloaded images to volume
	I0814 09:39:07.973538  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:08.011255  174943 machine.go:88] provisioning docker machine ...
	I0814 09:39:08.011292  174943 ubuntu.go:169] provisioning hostname "old-k8s-version-20210814093902-6746"
	I0814 09:39:08.011362  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.048765  174943 main.go:130] libmachine: Using SSH client type: native
	I0814 09:39:08.049003  174943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0814 09:39:08.049029  174943 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210814093902-6746 && echo "old-k8s-version-20210814093902-6746" | sudo tee /etc/hostname
	I0814 09:39:08.188179  174943 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210814093902-6746
	
	I0814 09:39:08.188248  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.227743  174943 main.go:130] libmachine: Using SSH client type: native
	I0814 09:39:08.227920  174943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0814 09:39:08.227955  174943 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210814093902-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210814093902-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210814093902-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:39:08.351978  174943 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:39:08.352004  174943 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:39:08.352038  174943 ubuntu.go:177] setting up certificates
	I0814 09:39:08.352048  174943 provision.go:83] configureAuth start
	I0814 09:39:08.352093  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:08.390983  174943 provision.go:138] copyHostCerts
	I0814 09:39:08.391046  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:39:08.391054  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:39:08.391107  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:39:08.391186  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:39:08.391201  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:39:08.391222  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:39:08.391269  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:39:08.391276  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:39:08.391293  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:39:08.391328  174943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210814093902-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210814093902-6746]
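	The server certificate generated here carries the SANs listed in san=[...]. The same kind of openssl check the cert-options test ran earlier in the table can verify them locally (a sketch; server.pem is the path logged a few lines below, under the job's .minikube directory):
	openssl x509 -text -noout -in .minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'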
	I0814 09:39:08.503869  174943 provision.go:172] copyRemoteCerts
	I0814 09:39:08.503920  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:39:08.503953  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.542791  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.632044  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:39:08.648011  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0814 09:39:08.663392  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:39:08.678374  174943 provision.go:86] duration metric: configureAuth took 326.31671ms
	I0814 09:39:08.678393  174943 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:39:08.678545  174943 config.go:177] Loaded profile config "old-k8s-version-20210814093902-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0814 09:39:08.678558  174943 machine.go:91] provisioned docker machine in 667.282302ms
	I0814 09:39:08.678567  174943 client.go:171] LocalClient.Create took 5.682731021s
	I0814 09:39:08.678591  174943 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210814093902-6746" took 5.682819008s
	I0814 09:39:08.678603  174943 start.go:267] post-start starting for "old-k8s-version-20210814093902-6746" (driver="docker")
	I0814 09:39:08.678611  174943 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:39:08.678659  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:39:08.678705  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.716852  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.803556  174943 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:39:08.806084  174943 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:39:08.806105  174943 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:39:08.806120  174943 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:39:08.806127  174943 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:39:08.806137  174943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:39:08.806181  174943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:39:08.806271  174943 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:39:08.806379  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:39:08.812467  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:39:08.827995  174943 start.go:270] post-start completed in 149.382379ms
	I0814 09:39:08.828361  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:08.867577  174943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json ...
	I0814 09:39:08.867772  174943 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:39:08.867807  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.905591  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.993182  174943 start.go:129] duration metric: createHost completed in 5.999767131s
	I0814 09:39:08.993204  174943 start.go:80] releasing machines lock for "old-k8s-version-20210814093902-6746", held for 5.999907972s
	I0814 09:39:08.993277  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:09.031752  174943 ssh_runner.go:149] Run: systemctl --version
	I0814 09:39:09.031796  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:09.031857  174943 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:39:09.031938  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:09.070624  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:09.073066  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:09.156561  174943 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:39:09.195165  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:39:09.204055  174943 docker.go:153] disabling docker service ...
	I0814 09:39:09.204109  174943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:39:09.218986  174943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:39:09.227073  174943 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:39:09.292773  174943 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:39:09.350837  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:39:09.359574  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:39:09.371511  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
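	minikube ships the generated containerd config as a base64 payload and decodes it on the node into /etc/containerd/config.toml (the trailing base64 -d | sudo tee in the command above). The force-systemd tests earlier in the audit table read the result back over ssh, and the same pattern works for any profile:
	out/minikube-linux-amd64 -p old-k8s-version-20210814093902-6746 ssh cat /etc/containerd/config.toml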
	I0814 09:39:09.383254  174943 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:39:09.388925  174943 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:39:09.388993  174943 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:39:09.395292  174943 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:39:09.401003  174943 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:39:09.455625  174943 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:39:09.517464  174943 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:39:09.517527  174943 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:39:09.520948  174943 start.go:413] Will wait 60s for crictl version
	I0814 09:39:09.521000  174943 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:39:09.543774  174943 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:39:09Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
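	The first crictl probe fails only because containerd's CRI server is still initializing after the restart at 09:39:09; minikube schedules a retry (~11s) and succeeds at 09:39:20 below. A manual equivalent of that wait, as a sketch rather than minikube's actual retry code:
	until sudo crictl version >/dev/null 2>&1; do sleep 2; done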
	I0814 09:39:20.590544  174943 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:39:20.612712  174943 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:39:20.612770  174943 ssh_runner.go:149] Run: containerd --version
	I0814 09:39:20.634809  174943 ssh_runner.go:149] Run: containerd --version
	I0814 09:39:20.656509  174943 out.go:177] * Preparing Kubernetes v1.14.0 on containerd 1.4.9 ...
	I0814 09:39:20.656580  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:39:20.693435  174943 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:39:20.696553  174943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
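	The one-liner above rewrites /etc/hosts in two steps: drop any existing host.minikube.internal entry, append the gateway mapping, then copy the temp file back with sudo (cp rather than mv or sed -i, likely because /etc/hosts is bind-mounted inside the container and cannot be replaced in place). Unrolled, the same operation reads:
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.58.1\thost.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts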
	I0814 09:39:20.706069  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:20.706115  174943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:39:20.727361  174943 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:39:20.727378  174943 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:39:20.727410  174943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:39:20.747946  174943 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:39:20.747961  174943 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:39:20.748000  174943 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:39:20.768036  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:20.768056  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:20.768065  174943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:39:20.768078  174943 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210814093902-6746 NodeName:old-k8s-version-20210814093902-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:39:20.768186  174943 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20210814093902-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210814093902-6746
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
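
Everything from `kubeadm config:` down to here is one generated artifact: four YAML manifests (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined with `---`, filled in from the options struct logged at kubeadm.go:153. A toy sketch of that render step using text/template; the template here is trimmed to a handful of fields and is not minikube's real one:

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down stand-in for the kubeadm config template.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        _ = t.Execute(os.Stdout, map[string]interface{}{
            "NodeIP":        "192.168.58.2",
            "APIServerPort": 8443,
            "CRISocket":     "/run/containerd/containerd.sock",
            "NodeName":      "old-k8s-version-20210814093902-6746",
        })
    }
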
	
	I0814 09:39:20.768270  174943 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20210814093902-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
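
A note on the unit text above: the empty `ExecStart=` line is the standard systemd drop-in idiom for clearing the inherited start command, so the second `ExecStart=` replaces the packaged kubelet invocation instead of being rejected as a duplicate for a non-oneshot service.
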
	I0814 09:39:20.768309  174943 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0814 09:39:20.774391  174943 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:39:20.774447  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:39:20.780415  174943 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (652 bytes)
	I0814 09:39:20.791377  174943 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:39:20.802514  174943 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
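
The `scp memory --> ...` lines mean these three files (the kubelet drop-in, the service unit, and the kubeadm config) were generated in memory and streamed to the node over SSH rather than copied from local files. A rough equivalent with golang.org/x/crypto/ssh; this is a sketch of the idea, not ssh_runner.go itself, and `sudo tee` stands in for whatever transfer minikube actually uses:

    package sshutil

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // pushFile streams an in-memory buffer to a remote path over an
    // existing SSH connection.
    func pushFile(client *ssh.Client, contents []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(contents)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }
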
	I0814 09:39:20.813779  174943 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:39:20.816320  174943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:39:20.824300  174943 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746 for IP: 192.168.58.2
	I0814 09:39:20.824348  174943 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:39:20.824369  174943 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:39:20.824424  174943 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key
	I0814 09:39:20.824434  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt with IP's: []
	I0814 09:39:21.018286  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt ...
	I0814 09:39:21.018312  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: {Name:mk4ac4b7e75286e8b79fd45038770d55847165f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.018501  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key ...
	I0814 09:39:21.018513  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key: {Name:mkfcf32e4fa8803fcf65dae722ac7ab1f6cf1297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.018594  174943 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041
	I0814 09:39:21.018604  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:39:21.104538  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 ...
	I0814 09:39:21.104563  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041: {Name:mk874ad2629ebee729aa095bca20e1e9bf8bb4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.104717  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041 ...
	I0814 09:39:21.104730  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041: {Name:mk1727c0c17503796e9734870659a2a374768265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.104817  174943 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt
	I0814 09:39:21.104885  174943 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key
	I0814 09:39:21.104937  174943 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key
	I0814 09:39:21.104946  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt with IP's: []
	I0814 09:39:21.258449  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt ...
	I0814 09:39:21.258494  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt: {Name:mk2f05e5d279ba24cfb466b9815a114a874915fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.258659  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key ...
	I0814 09:39:21.258672  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key: {Name:mk44a042ae30bbae120b4e022a4a2b2eb4d2ee31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
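
The crypto.go lines above generate fresh key pairs and sign them with the cached minikubeCA; the apiserver cert gets the IP SANs listed (192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A condensed sketch of that signing step with the standard library; a throw-away CA stands in for minikubeCA, and key sizes and validity periods are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // signServingCert issues a serving certificate for the given IP SANs,
    // signed by the supplied CA pair.
    func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        // Throw-away self-signed CA standing in for the cached minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        certPEM, _ := signServingCert(caCert, caKey, []net.IP{
            net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
            net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
        })
        os.Stdout.Write(certPEM)
    }
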
	I0814 09:39:21.258831  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:39:21.258867  174943 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:39:21.258878  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:39:21.258899  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:39:21.258922  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:39:21.258944  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:39:21.258985  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:39:21.259945  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:39:21.276890  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 09:39:21.316234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:39:21.331459  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:39:21.346701  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:39:21.361781  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:39:21.376654  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:39:21.391554  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:39:21.406234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:39:21.421234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:39:21.436252  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:39:21.451359  174943 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:39:21.462306  174943 ssh_runner.go:149] Run: openssl version
	I0814 09:39:21.466604  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:39:21.472871  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.475534  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.475567  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.479760  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:39:21.486094  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:39:21.492443  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.495062  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.495096  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.499238  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:39:21.505598  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:39:21.511951  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.514698  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.514734  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.518872  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
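
The three openssl/ln rounds above follow OpenSSL's hashed-directory lookup convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a `<hash>.0` symlink in /etc/ssl/certs is how the verifier finds the CA during chain building. A minimal sketch of one round; note that unlike `ln -fs`, os.Symlink fails if the link already exists:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash reproduces one `openssl x509 -hash` + `ln -s` round.
    func linkCertByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        return os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", hash+".0"))
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
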
	I0814 09:39:21.525190  174943 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:39:21.525270  174943 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:39:21.525302  174943 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:39:21.549326  174943 cri.go:76] found id: ""
	I0814 09:39:21.549383  174943 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:39:21.555843  174943 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:39:21.562038  174943 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:39:21.562091  174943 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:39:21.567941  174943 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:39:21.567973  174943 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:39:21.866263  174943 out.go:204]   - Generating certificates and keys ...
	I0814 09:39:23.763238  174943 out.go:204]   - Booting up control plane ...
	I0814 09:39:33.306959  174943 out.go:204]   - Configuring RBAC rules ...
	I0814 09:39:33.721817  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:33.721846  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:33.723323  174943 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:39:33.723409  174943 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:39:33.726795  174943 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0814 09:39:33.726810  174943 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:39:33.738760  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:39:34.040132  174943 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:39:34.040255  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.040255  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=old-k8s-version-20210814093902-6746 minikube.k8s.io/updated_at=2021_08_14T09_39_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.055567  174943 ops.go:34] apiserver oom_adj: 16
	I0814 09:39:34.055587  174943 ops.go:39] adjusting apiserver oom_adj to -10
	I0814 09:39:34.055600  174943 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
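
The ops.go lines read the apiserver's current OOM score from /proc/<pid>/oom_adj (16 here) and lower it to -10 so the kernel prefers other victims under memory pressure. A sketch of the same adjustment (root required to write; oom_adj is the legacy interface, oom_score_adj being its modern replacement):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // protectAPIServer finds the kube-apiserver pid the way the log does
    // and lowers its legacy OOM-killer weight.
    func protectAPIServer() error {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return err
        }
        path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(out)))
        cur, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        fmt.Printf("apiserver oom_adj: %s", cur)
        return os.WriteFile(path, []byte("-10\n"), 0644)
    }

    func main() {
        if err := protectAPIServer(); err != nil {
            fmt.Println(err)
        }
    }
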
	I0814 09:39:34.148540  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.747637  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:35.247207  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:35.747299  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:36.247652  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:36.747784  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:38.872980  174943 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.125158429s)
	I0814 09:39:39.748032  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:42.922314  174943 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.174250662s)
	I0814 09:39:43.247755  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:43.747397  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:44.247035  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:44.747238  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:45.247088  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:45.747687  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:46.247881  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:46.747484  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:47.247039  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:47.747143  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:48.247746  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:48.313676  174943 kubeadm.go:985] duration metric: took 14.273486803s to wait for elevateKubeSystemPrivileges.
	I0814 09:39:48.313705  174943 kubeadm.go:392] StartCluster complete in 26.788521087s
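
The half-second `kubectl get sa default` cadence above is minikube waiting for kubeadm's controllers to mint the default service account before the minikube-rbac clusterrolebinding can take effect. The same wait expressed with client-go instead of shelling out over SSH (a sketch only; minikube itself runs kubectl on the node, as shown):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists in
    // the default namespace, roughly twice a second like the log above.
    func waitForDefaultSA(kubeconfigPath string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %v", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
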
	I0814 09:39:48.313722  174943 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:48.313812  174943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:39:48.315397  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:48.831656  174943 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210814093902-6746" rescaled to 1
	I0814 09:39:48.831704  174943 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0814 09:39:48.834633  174943 out.go:177] * Verifying Kubernetes components...
	I0814 09:39:48.834687  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:39:48.831754  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:39:48.831781  174943 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0814 09:39:48.834820  174943 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.834840  174943 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.831926  174943 config.go:177] Loaded profile config "old-k8s-version-20210814093902-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W0814 09:39:48.834850  174943 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:39:48.834878  174943 host.go:66] Checking if "old-k8s-version-20210814093902-6746" exists ...
	I0814 09:39:48.834886  174943 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.834907  174943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.835225  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:48.835399  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:48.888396  174943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:39:48.888559  174943 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:39:48.888577  174943 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:39:48.888634  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:48.903387  174943 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210814093902-6746"
	W0814 09:39:48.903420  174943 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:39:48.903452  174943 host.go:66] Checking if "old-k8s-version-20210814093902-6746" exists ...
	I0814 09:39:48.904037  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:48.937455  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:48.938442  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 09:39:48.940293  174943 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210814093902-6746" to be "Ready" ...
	I0814 09:39:48.954690  174943 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:39:48.954717  174943 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:39:48.954770  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:48.993028  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:49.121367  174943 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:39:49.221250  174943 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:39:49.431129  174943 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0814 09:39:49.645229  174943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0814 09:39:49.645251  174943 addons.go:344] enableAddons completed in 813.480152ms
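
The sed pipeline at 09:39:48 is how the "host record injected into CoreDNS" line comes about: it fetches the coredns ConfigMap, inserts a hosts plugin block ahead of the Corefile's `forward . /etc/resolv.conf` line, and pipes the result back through `kubectl replace`. Reconstructed from that command, the injected stanza is:

    hosts {
       192.168.58.1 host.minikube.internal
       fallthrough
    }

`fallthrough` hands any name other than host.minikube.internal on to the next plugin, so normal cluster DNS resolution is unaffected.
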
	I0814 09:39:50.947459  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:39:53.447590  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:39:55.947391  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:39:58.447510  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:00.947417  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:02.947492  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:05.446550  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:07.446706  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:09.447298  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:11.946817  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:14.447458  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:16.947041  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:19.447186  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:21.946642  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:24.447360  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:26.946467  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:28.947685  174943 node_ready.go:49] node "old-k8s-version-20210814093902-6746" has status "Ready":"True"
	I0814 09:40:28.947712  174943 node_ready.go:38] duration metric: took 40.007394634s waiting for node "old-k8s-version-20210814093902-6746" to be "Ready" ...
	I0814 09:40:28.947724  174943 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:40:28.955933  174943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:30.968391  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:33.467728  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:35.968424  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:38.467449  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:40.967143  174943 pod_ready.go:92] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"True"
	I0814 09:40:40.967172  174943 pod_ready.go:81] duration metric: took 12.011209905s waiting for pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:40.967188  174943 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnmq2" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:40.970639  174943 pod_ready.go:92] pod "kube-proxy-xnmq2" in "kube-system" namespace has status "Ready":"True"
	I0814 09:40:40.970656  174943 pod_ready.go:81] duration metric: took 3.451668ms waiting for pod "kube-proxy-xnmq2" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:40.970667  174943 pod_ready.go:38] duration metric: took 12.022929345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:40:40.970690  174943 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:40:40.970738  174943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:40:40.989371  174943 api_server.go:70] duration metric: took 52.157635575s to wait for apiserver process to appear ...
	I0814 09:40:40.989389  174943 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:40:40.989397  174943 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:40:40.994102  174943 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:40:40.994718  174943 api_server.go:139] control plane version: v1.14.0
	I0814 09:40:40.994735  174943 api_server.go:129] duration metric: took 5.341752ms to wait for apiserver health ...
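
The healthz probe above is a plain HTTPS GET against https://192.168.58.2:8443/healthz that expects a 200 with body "ok" before the control-plane version is read. A sketch of that check; real code would trust minikubeCA rather than skipping certificate verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz reports whether the apiserver answers /healthz with "ok".
    func checkHealthz(addr string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch-only shortcut; verify against minikubeCA in practice.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://" + addr + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("192.168.58.2:8443"))
    }
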
	I0814 09:40:40.994742  174943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:40:40.997346  174943 system_pods.go:59] 5 kube-system pods found
	I0814 09:40:40.997365  174943 system_pods.go:61] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997370  174943 system_pods.go:61] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997374  174943 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997377  174943 system_pods.go:61] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997381  174943 system_pods.go:61] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997385  174943 system_pods.go:74] duration metric: took 2.638722ms to wait for pod list to return data ...
	I0814 09:40:40.997395  174943 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:40:40.999462  174943 default_sa.go:45] found service account: "default"
	I0814 09:40:40.999478  174943 default_sa.go:55] duration metric: took 2.078779ms for default service account to be created ...
	I0814 09:40:40.999484  174943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:40:41.001841  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:41.001858  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001864  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001868  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001872  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001877  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001899  174943 retry.go:31] will retry after 305.063636ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:41.310417  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:41.310448  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310456  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310462  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310482  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310490  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310507  174943 retry.go:31] will retry after 338.212508ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:41.651868  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:41.651891  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651896  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651904  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651910  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651915  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651932  174943 retry.go:31] will retry after 378.459802ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:42.034725  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:42.034751  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034757  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034761  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034765  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034768  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034782  174943 retry.go:31] will retry after 469.882201ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:42.508622  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:42.508652  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508659  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508666  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508672  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508679  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508695  174943 retry.go:31] will retry after 667.365439ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:43.180084  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:43.180112  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180118  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180122  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180127  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180131  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180146  174943 retry.go:31] will retry after 597.243124ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:43.781014  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:43.781038  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781043  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781047  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781051  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781055  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781067  174943 retry.go:31] will retry after 789.889932ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:44.575280  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:44.575310  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575318  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575325  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575331  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575339  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575358  174943 retry.go:31] will retry after 951.868007ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:45.530651  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:45.530677  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530689  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530698  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530703  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530706  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530720  174943 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:46.876109  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:46.876135  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876142  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876149  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876155  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876160  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876177  174943 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:48.756938  174943 system_pods.go:86] 7 kube-system pods found
	I0814 09:40:48.756965  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.756973  174943 system_pods.go:89] "etcd-old-k8s-version-20210814093902-6746" [b3f06cb8-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:48.756981  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.756988  174943 system_pods.go:89] "kube-apiserver-old-k8s-version-20210814093902-6746" [b48904a1-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:48.756996  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.757002  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.757008  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.757026  174943 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:51.454433  174943 system_pods.go:86] 7 kube-system pods found
	I0814 09:40:51.454463  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454471  174943 system_pods.go:89] "etcd-old-k8s-version-20210814093902-6746" [b3f06cb8-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:51.454478  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454486  174943 system_pods.go:89] "kube-apiserver-old-k8s-version-20210814093902-6746" [b48904a1-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:51.454499  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454505  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454512  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454529  174943 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver, kube-scheduler
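
The delays in these retry.go lines grow roughly geometrically but wobble (597ms after 667ms, 2.49s after 2.69s), the signature of jittered exponential backoff: each wait is scaled up and randomized so repeated polls don't synchronize. A generic sketch of that policy; the growth factor and jitter fraction are assumptions, not minikube's exact parameters:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries f with a jittered, geometrically growing delay.
    func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay / 4)))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow ~1.5x per attempt
        }
        return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
    }

    func main() {
        n := 0
        err := retryWithBackoff(10, 300*time.Millisecond, func() error {
            n++
            if n < 4 {
                return fmt.Errorf("missing components: etcd, kube-apiserver, kube-scheduler")
            }
            return nil
        })
        fmt.Println(err)
    }
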
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a747d02c26253       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   0eed254b3316c
	ef9cd508c4bcf       296a6d5035e2d       4 minutes ago       Running             coredns                   0                   79704c1ba1377
	9753722af7745       6de166512aa22       4 minutes ago       Running             kindnet-cni               0                   e9f1ed022aae0
	66b515b3e4a14       adb2816ea823a       4 minutes ago       Running             kube-proxy                0                   63ba7b0ef4459
	0fcd2105780a3       bc2bb319a7038       4 minutes ago       Running             kube-controller-manager   0                   74d460f2e7a7f
	8bcc07d573eb1       0369cf4303ffd       4 minutes ago       Running             etcd                      0                   60a80199b4a57
	d3bf648d26067       6be0dc1302e30       4 minutes ago       Running             kube-scheduler            0                   faadff72e3a9c
	ab29adb23277d       3d174f00aa39e       4 minutes ago       Running             kube-apiserver            0                   7b9c957209d40
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:40:55 UTC. --
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.541959125Z" level=info msg="Connect containerd service"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542013371Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542632511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542708189Z" level=info msg="Start subscribing containerd event"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542770137Z" level=info msg="Start recovering state"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542857642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542918689Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542967809Z" level=info msg="containerd successfully booted in 0.040983s"
	Aug 14 09:36:58 pause-20210814093545-6746 systemd[1]: Started containerd container runtime.
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625754612Z" level=info msg="Start event monitor"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625793205Z" level=info msg="Start snapshots syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625802136Z" level=info msg="Start cni network conf syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625807599Z" level=info msg="Start streaming server"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.008705018Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.026044573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 pid=2510
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.163827167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.166278582Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.228933904Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.229330725Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.371077991Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\" returns successfully"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449187715Z" level=info msg="Finish piping stderr of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449222888Z" level=info msg="Finish piping stdout of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.450533507Z" level=info msg="TaskExit event &TaskExit{ContainerID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,Pid:2562,ExitStatus:255,ExitedAt:2021-08-14 09:37:32.450264852 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501408095Z" level=info msg="shim disconnected" id=a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501502681Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210814093545-6746
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210814093545-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=pause-20210814093545-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_36_29_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:36:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210814093545-6746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Aug 2021 09:37:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210814093545-6746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                0a58e276-bec2-4249-8c1b-588583c789f0
	  Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-7njgj                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m25s
	  kube-system                 etcd-pause-20210814093545-6746                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m33s
	  kube-system                 kindnet-tbw9g                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m26s
	  kube-system                 kube-apiserver-pause-20210814093545-6746             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-pause-20210814093545-6746    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-zgc2h                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-pause-20210814093545-6746             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m34s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s  kubelet     Node pause-20210814093545-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s  kubelet     Node pause-20210814093545-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s  kubelet     Node pause-20210814093545-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m34s  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m24s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m12s  kubelet     Node pause-20210814093545-6746 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug14 09:29] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth38d0eb85
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a bd 7c 39 49 62 08 06        ........|9Ib..
	[Aug14 09:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:32] cgroup: cgroup2: unknown option "nsdelegate"
	[ +13.411048] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.035402] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:33] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.451942] cgroup: cgroup2: unknown option "nsdelegate"
	[ +14.641136] tee (136175): /proc/134359/oom_adj is deprecated, please use /proc/134359/oom_score_adj instead.
	[Aug14 09:34] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.573195] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe29e5784
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 4c 1a e2 69 4b 08 06        .......L..iK..
	[  +8.954711] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:35] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth529d8992
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 22 4f ef 2e 27 f0 08 06        ......"O..'...
	[  +9.430011] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:36] cgroup: cgroup2: unknown option "nsdelegate"
	[ +36.823390] cgroup: cgroup2: unknown option "nsdelegate"
	[ +15.237179] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth43e4fc69
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 7b 35 3d 7d 88 08 06        .......{5=}...
	[Aug14 09:37] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:38] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd8221cd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 44 cc a6 70 5e 08 06        .......D..p^..
	
	* 
	* ==> etcd [8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed] <==
	* 2021-08-14 09:40:48.820397 I | embed: rejected connection from "127.0.0.1:34826" (error "write tcp 127.0.0.1:2379->127.0.0.1:34826: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.820524 I | embed: rejected connection from "127.0.0.1:34788" (error "write tcp 127.0.0.1:2379->127.0.0.1:34788: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.820586 I | embed: rejected connection from "127.0.0.1:34792" (error "write tcp 127.0.0.1:2379->127.0.0.1:34792: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.820858 I | embed: rejected connection from "127.0.0.1:34776" (error "write tcp 127.0.0.1:2379->127.0.0.1:34776: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.821003 I | embed: rejected connection from "127.0.0.1:34784" (error "write tcp 127.0.0.1:2379->127.0.0.1:34784: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.821113 I | embed: rejected connection from "127.0.0.1:34758" (error "write tcp 127.0.0.1:2379->127.0.0.1:34758: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.821306 I | embed: rejected connection from "127.0.0.1:34814" (error "write tcp 127.0.0.1:2379->127.0.0.1:34814: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822003 I | embed: rejected connection from "127.0.0.1:34794" (error "write tcp 127.0.0.1:2379->127.0.0.1:34794: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822033 I | embed: rejected connection from "127.0.0.1:34804" (error "write tcp 127.0.0.1:2379->127.0.0.1:34804: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822098 I | embed: rejected connection from "127.0.0.1:34816" (error "write tcp 127.0.0.1:2379->127.0.0.1:34816: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822334 I | embed: rejected connection from "127.0.0.1:34820" (error "write tcp 127.0.0.1:2379->127.0.0.1:34820: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822376 I | embed: rejected connection from "127.0.0.1:34774" (error "write tcp 127.0.0.1:2379->127.0.0.1:34774: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.823267 I | embed: rejected connection from "127.0.0.1:34822" (error "write tcp 127.0.0.1:2379->127.0.0.1:34822: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.823457 I | embed: rejected connection from "127.0.0.1:34288" (error "write tcp 127.0.0.1:2379->127.0.0.1:34288: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.823572 I | embed: rejected connection from "127.0.0.1:34796" (error "write tcp 127.0.0.1:2379->127.0.0.1:34796: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.825638 I | embed: rejected connection from "127.0.0.1:34062" (error "write tcp 127.0.0.1:2379->127.0.0.1:34062: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826416 I | embed: rejected connection from "127.0.0.1:33842" (error "write tcp 127.0.0.1:2379->127.0.0.1:33842: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826506 I | embed: rejected connection from "127.0.0.1:34756" (error "write tcp 127.0.0.1:2379->127.0.0.1:34756: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826573 I | embed: rejected connection from "127.0.0.1:33860" (error "write tcp 127.0.0.1:2379->127.0.0.1:33860: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826983 I | embed: rejected connection from "127.0.0.1:34790" (error "write tcp 127.0.0.1:2379->127.0.0.1:34790: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.828627 I | embed: rejected connection from "127.0.0.1:33960" (error "write tcp 127.0.0.1:2379->127.0.0.1:33960: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.830733 I | embed: rejected connection from "127.0.0.1:33828" (error "write tcp 127.0.0.1:2379->127.0.0.1:33828: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.901389 I | embed: rejected connection from "127.0.0.1:34742" (error "write tcp 127.0.0.1:2379->127.0.0.1:34742: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.902345 I | embed: rejected connection from "127.0.0.1:33946" (error "write tcp 127.0.0.1:2379->127.0.0.1:33946: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.902626 I | embed: rejected connection from "127.0.0.1:33836" (error "write tcp 127.0.0.1:2379->127.0.0.1:33836: write: broken pipe", ServerName "")
	
	* 
	* ==> kernel <==
	*  09:41:01 up  1:23,  0 users,  load average: 0.68, 2.02, 1.72
	Linux pause-20210814093545-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086] <==
	* W0814 09:40:47.107086       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0814 09:40:47.945729       1 trace.go:205] Trace[165450648]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:39:47.945) (total time: 60000ms):
	Trace[165450648]: [1m0.000281407s] [1m0.000281407s] END
	E0814 09:40:47.945758       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:40:47.945812       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:40:47.947224       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:40:47.948315       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:40:47.949813       1 trace.go:205] Trace[87157100]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:39:47.945) (total time: 60004ms):
	Trace[87157100]: [1m0.004385913s] [1m0.004385913s] END
	W0814 09:40:48.718147       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:49.282760       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:51.460165       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0814 09:40:58.661909       1 trace.go:205] Trace[541385592]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:40:55.347) (total time: 3314ms):
	Trace[541385592]: [3.314452536s] [3.314452536s] END
	I0814 09:40:58.661925       1 trace.go:205] Trace[395163066]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:40:01.527) (total time: 57134ms):
	Trace[395163066]: [57.13448904s] [57.13448904s] END
	I0814 09:40:58.662278       1 trace.go:205] Trace[1975391147]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:40:01.527) (total time: 57134ms):
	Trace[1975391147]: ---"Listing from storage done" 57134ms (09:40:00.661)
	Trace[1975391147]: [57.134853739s] [57.134853739s] END
	I0814 09:40:58.662292       1 trace.go:205] Trace[216913828]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:40:55.347) (total time: 3314ms):
	Trace[216913828]: ---"Listing from storage done" 3314ms (09:40:00.661)
	Trace[216913828]: [3.314848669s] [3.314848669s] END
	I0814 09:41:01.321856       1 trace.go:205] Trace[622601983]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210814093545-6746,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:40:58.950) (total time: 2371ms):
	Trace[622601983]: ---"About to write a response" 2371ms (09:41:00.321)
	Trace[622601983]: [2.371407821s] [2.371407821s] END
	
	* 
	* ==> kube-controller-manager [0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216] <==
	* I0814 09:36:35.558636       1 shared_informer.go:247] Caches are synced for cronjob 
	I0814 09:36:35.594056       1 shared_informer.go:247] Caches are synced for disruption 
	I0814 09:36:35.594080       1 disruption.go:371] Sending events to api server.
	I0814 09:36:35.618269       1 shared_informer.go:247] Caches are synced for attach detach 
	I0814 09:36:35.626411       1 shared_informer.go:247] Caches are synced for PV protection 
	I0814 09:36:35.658052       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0814 09:36:35.658104       1 shared_informer.go:247] Caches are synced for expand 
	I0814 09:36:35.666214       1 shared_informer.go:247] Caches are synced for endpoint 
	I0814 09:36:35.666883       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tbw9g"
	I0814 09:36:35.670094       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zgc2h"
	I0814 09:36:35.736985       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0814 09:36:35.758656       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0814 09:36:35.758886       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0814 09:36:35.767757       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.807894       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0814 09:36:35.810106       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.813582       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0814 09:36:36.206921       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.206946       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:36:36.237226       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.328706       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0814 09:36:36.413536       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:36.418045       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7njgj"
	I0814 09:36:36.433569       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:50.510455       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97] <==
	* I0814 09:36:37.040930       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:36:37.040978       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:36:37.041009       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:36:37.135620       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:36:37.135697       1 server_others.go:212] Using iptables Proxier.
	I0814 09:36:37.135734       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:36:37.135764       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:36:37.136453       1 server.go:643] Version: v1.21.3
	I0814 09:36:37.137196       1 config.go:315] Starting service config controller
	I0814 09:36:37.138165       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:36:37.139739       1 config.go:224] Starting endpoint slice config controller
	I0814 09:36:37.139765       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:36:37.141550       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:36:37.142664       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:36:37.240414       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:36:37.240445       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119] <==
	* I0814 09:36:19.239734       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.239780       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.240100       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:36:19.240128       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:36:19.310200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:19.310391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:36:19.310488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.310570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:36:19.310642       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:36:19.310718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:36:19.310788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:19.310860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:36:19.310940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:19.311017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:36:19.312650       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:36:20.163865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.194900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:20.263532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.308670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:20.382950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:20.414192       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0814 09:36:23.340697       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:41:01 UTC. --
	Aug 14 09:40:49 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:49.273417    3866 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036536    3866 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036818    3866 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036870    3866 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036899    3866 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036911    3866 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036919    3866 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037104    3866 remote_runtime.go:62] parsed scheme: ""
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037112    3866 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037144    3866 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037152    3866 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037200    3866 remote_image.go:50] parsed scheme: ""
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037205    3866 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037212    3866 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037216    3866 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037281    3866 kubelet.go:404] "Attempting to sync node with API server"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037309    3866 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037374    3866 kubelet.go:283] "Adding apiserver pod source"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037394    3866 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.038756    3866 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: E0814 09:40:54.305472    3866 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.306013    3866 server.go:1190] "Started kubelet"
	Aug 14 09:40:54 pause-20210814093545-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:40:54 pause-20210814093545-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 154 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00013b210, 0xc000000002)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00013b200)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00052a720, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000140f00, 0x18e5530, 0xc00051d0c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000362440)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000362440, 0x18b3d60, 0xc000708690, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000362440, 0x3b9aca00, 0x0, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000362440, 0x3b9aca00, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210814093545-6746 -n pause-20210814093545-6746
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210814093545-6746 -n pause-20210814093545-6746: exit status 2 (323.044759ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210814093545-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210814093545-6746 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210814093545-6746 describe pod : exit status 1 (49.574002ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210814093545-6746 describe pod : exit status 1
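Note: the `describe pod` failure above is mechanical rather than a cluster problem. The preceding jsonpath query found no pods with `status.phase!=Running`, so kubectl was invoked with an empty name and exited 1 with "resource name may not be empty". A minimal Go sketch of how a harness could guard that step (a hypothetical helper for illustration, not the actual helpers_test.go code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// describeNonRunningPods lists pods whose phase is not Running and only
	// calls `kubectl describe pod` when that list is non-empty; passing an
	// empty name makes kubectl fail with "resource name may not be empty".
	func describeNonRunningPods(profile string) error {
		out, err := exec.Command("kubectl", "--context", profile,
			"get", "po", "-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return err
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			return nil // nothing non-running; skip instead of failing
		}
		args := append([]string{"--context", profile, "describe", "pod"}, names...)
		cmd := exec.Command("kubectl", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := describeNonRunningPods("pause-20210814093545-6746"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}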
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210814093545-6746
helpers_test.go:236: (dbg) docker inspect pause-20210814093545-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c",
	        "Created": "2021-08-14T09:35:47.328510764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:35:47.788540698Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hostname",
	        "HostsPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/hosts",
	        "LogPath": "/var/lib/docker/containers/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c/348c3cd444d991a3aff2e731a2f8e86762e7531b4f22db70254d290f0ebac53c-json.log",
	        "Name": "/pause-20210814093545-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210814093545-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210814093545-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/28eea91dda2212b6278c684a0f6bc4bc909fb77e744b7014f3a952feb98397ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210814093545-6746",
	                "Source": "/var/lib/docker/volumes/pause-20210814093545-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210814093545-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "name.minikube.sigs.k8s.io": "pause-20210814093545-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ccc2af153ef9d917059bc8c4f07b140ac515f4a831ba1bf6c90b0246a3c1997",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1ccc2af153ef",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210814093545-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "348c3cd444d9"
	                    ],
	                    "NetworkID": "d1c345d3493c76f3a399eb72a44a3805f583371e015cb9c75f513d1b9430742c",
	                    "EndpointID": "7f43385a3cfb69f0364951734129c7173a9f54c3b30297f57443926db80f5d72",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
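For reference, the `NetworkSettings.Ports` block in the inspect output above is where the host-side bindings live (e.g. 22/tcp -> 127.0.0.1:32898), which is what SSH-based tooling has to resolve before it can reach the node. A small Go sketch that pulls the 22/tcp binding out of `docker inspect` JSON piped on stdin (struct names are illustrative; only the fields visible in the output above are assumed):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// portBinding mirrors the HostIp/HostPort objects under NetworkSettings.Ports.
	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		var containers []inspect // docker inspect emits a JSON array
		if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, c := range containers {
			for _, b := range c.NetworkSettings.Ports["22/tcp"] {
				fmt.Printf("ssh on %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:32898
			}
		}
	}

Usage (hypothetical file name): docker inspect pause-20210814093545-6746 | go run hostport.go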
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210814093545-6746 -n pause-20210814093545-6746: exit status 2 (315.487252ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210814093545-6746 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                     | kubernetes-upgrade-20210814093232-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:33:38 UTC |
	|         | kubernetes-upgrade-20210814093232-6746 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210814093232-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:38 UTC | Sat, 14 Aug 2021 09:33:59 UTC |
	|         | kubernetes-upgrade-20210814093232-6746 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210814093232-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:32:32 UTC | Sat, 14 Aug 2021 09:34:08 UTC |
	|         | offline-containerd-20210814093232-6746 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=docker            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210814093232-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:34:08 UTC | Sat, 14 Aug 2021 09:34:11 UTC |
	|         | offline-containerd-20210814093232-6746 |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210814093232-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:33:59 UTC | Sat, 14 Aug 2021 09:35:00 UTC |
	|         | kubernetes-upgrade-20210814093232-6746 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0      |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210814093232-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:01 UTC | Sat, 14 Aug 2021 09:35:42 UTC |
	|         | kubernetes-upgrade-20210814093232-6746 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0      |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	| delete  | -p                                     | kubernetes-upgrade-20210814093232-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:35:45 UTC |
	|         | kubernetes-upgrade-20210814093232-6746 |                                        |         |         |                               |                               |
	| start   | -p                                     | missing-upgrade-20210814093411-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:42 UTC | Sat, 14 Aug 2021 09:36:31 UTC |
	|         | missing-upgrade-20210814093411-6746    |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | missing-upgrade-20210814093411-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:31 UTC | Sat, 14 Aug 2021 09:36:34 UTC |
	|         | missing-upgrade-20210814093411-6746    |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210814093634-6746         | kubenet-20210814093634-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:34 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p flannel-20210814093635-6746         | flannel-20210814093635-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:35 UTC |
	| delete  | -p false-20210814093635-6746           | false-20210814093635-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:35 UTC | Sat, 14 Aug 2021 09:36:36 UTC |
	| start   | -p pause-20210814093545-6746           | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:35:45 UTC | Sat, 14 Aug 2021 09:36:56 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=docker             |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p pause-20210814093545-6746           | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:56 UTC | Sat, 14 Aug 2021 09:37:18 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:36 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | force-systemd-flag-20210814093636-6746 |                                        |         |         |                               |                               |
	|         | --memory=2048 --force-systemd          |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	| -p      | force-systemd-flag-20210814093636-6746 | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:28 UTC |
	|         | force-systemd-flag-20210814093636-6746 |                                        |         |         |                               |                               |
	| start   | -p                                     | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:28 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | force-systemd-env-20210814093728-6746  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210814093728-6746  | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:15 UTC |
	|         | force-systemd-env-20210814093728-6746  |                                        |         |         |                               |                               |
	| start   | -p                                     | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:15 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | cert-options-20210814093815-6746       |                                        |         |         |                               |                               |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1              |                                        |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15          |                                        |         |         |                               |                               |
	|         | --apiserver-names=localhost            |                                        |         |         |                               |                               |
	|         | --apiserver-names=www.google.com       |                                        |         |         |                               |                               |
	|         | --apiserver-port=8555                  |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | cert-options-20210814093815-6746       | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | ssh openssl x509 -text -noout -in      |                                        |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |         |         |                               |                               |
	| delete  | -p                                     | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:39:02 UTC |
	|         | cert-options-20210814093815-6746       |                                        |         |         |                               |                               |
	| unpause | -p pause-20210814093545-6746           | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:48 UTC | Sat, 14 Aug 2021 09:40:48 UTC |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	| -p      | pause-20210814093545-6746 logs         | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:54 UTC | Sat, 14 Aug 2021 09:41:01 UTC |
	|         | -n 25                                  |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:39:02
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:39:02.663117  174943 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:39:02.663193  174943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:39:02.663215  174943 out.go:311] Setting ErrFile to fd 2...
	I0814 09:39:02.663219  174943 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:39:02.663297  174943 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:39:02.663528  174943 out.go:305] Setting JSON to false
	I0814 09:39:02.698199  174943 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4905,"bootTime":1628929038,"procs":253,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:39:02.698265  174943 start.go:121] virtualization: kvm guest
	I0814 09:39:02.701044  174943 out.go:177] * [old-k8s-version-20210814093902-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:39:02.702691  174943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:39:02.701175  174943 notify.go:169] Checking for updates...
	I0814 09:39:02.704326  174943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:39:02.705770  174943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:39:02.707156  174943 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:39:02.707588  174943 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:39:02.707661  174943 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:39:02.707716  174943 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:39:02.707743  174943 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:39:02.757772  174943 docker.go:132] docker version: linux-19.03.15
	I0814 09:39:02.757846  174943 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:39:02.834009  174943 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:39:02.792005104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:39:02.834125  174943 docker.go:244] overlay module found
	I0814 09:39:02.836097  174943 out.go:177] * Using the docker driver based on user configuration
	I0814 09:39:02.836120  174943 start.go:278] selected driver: docker
	I0814 09:39:02.836125  174943 start.go:751] validating driver "docker" against <nil>
	I0814 09:39:02.836141  174943 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:39:02.836197  174943 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:39:02.836214  174943 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:39:02.837730  174943 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:39:02.838481  174943 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:39:02.915948  174943 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:39:02.872598918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:39:02.916078  174943 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:39:02.916214  174943 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:39:02.916233  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:02.916238  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:02.916244  174943 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:39:02.916251  174943 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:39:02.916255  174943 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:39:02.916263  174943 start_flags.go:277] config:
	{Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:39:02.918359  174943 out.go:177] * Starting control plane node old-k8s-version-20210814093902-6746 in cluster old-k8s-version-20210814093902-6746
	I0814 09:39:02.918400  174943 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:39:02.919787  174943 out.go:177] * Pulling base image ...
	I0814 09:39:02.919820  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:02.919847  174943 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:39:02.919872  174943 cache.go:56] Caching tarball of preloaded images
	I0814 09:39:02.919916  174943 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:39:02.920015  174943 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:39:02.920031  174943 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on containerd
	I0814 09:39:02.920134  174943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json ...
	I0814 09:39:02.920153  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json: {Name:mk04a532e2ac4420a6fb8880a4de59459858f2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:02.993067  174943 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:39:02.993094  174943 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:39:02.993112  174943 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:39:02.993157  174943 start.go:313] acquiring machines lock for old-k8s-version-20210814093902-6746: {Name:mk8e2fe5e854673f5d1990fabb56ddd331c139dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:39:02.993284  174943 start.go:317] acquired machines lock for "old-k8s-version-20210814093902-6746" in 106.703µs
	I0814 09:39:02.993313  174943 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0814 09:39:02.993400  174943 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:39:02.995534  174943 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0814 09:39:02.995772  174943 start.go:160] libmachine.API.Create for "old-k8s-version-20210814093902-6746" (driver="docker")
	I0814 09:39:02.995827  174943 client.go:168] LocalClient.Create starting
	I0814 09:39:02.995889  174943 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:39:02.995931  174943 main.go:130] libmachine: Decoding PEM data...
	I0814 09:39:02.995957  174943 main.go:130] libmachine: Parsing certificate...
	I0814 09:39:02.996078  174943 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:39:02.996106  174943 main.go:130] libmachine: Decoding PEM data...
	I0814 09:39:02.996128  174943 main.go:130] libmachine: Parsing certificate...
	I0814 09:39:02.996510  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:39:03.033171  174943 cli_runner.go:162] docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:39:03.033249  174943 network_create.go:255] running [docker network inspect old-k8s-version-20210814093902-6746] to gather additional debugging logs...
	I0814 09:39:03.033270  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746
	W0814 09:39:03.069356  174943 cli_runner.go:162] docker network inspect old-k8s-version-20210814093902-6746 returned with exit code 1
	I0814 09:39:03.069384  174943 network_create.go:258] error running [docker network inspect old-k8s-version-20210814093902-6746]: docker network inspect old-k8s-version-20210814093902-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20210814093902-6746
	I0814 09:39:03.069405  174943 network_create.go:260] output of [docker network inspect old-k8s-version-20210814093902-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20210814093902-6746
	
	** /stderr **
	I0814 09:39:03.069444  174943 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:39:03.106608  174943 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-d1c345d3493c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:f5:52:80}}
	I0814 09:39:03.107937  174943 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000a10cb0] misses:0}
	I0814 09:39:03.107986  174943 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:39:03.108002  174943 network_create.go:106] attempt to create docker network old-k8s-version-20210814093902-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:39:03.108057  174943 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210814093902-6746
	I0814 09:39:03.177694  174943 network_create.go:90] docker network old-k8s-version-20210814093902-6746 192.168.58.0/24 created
	I0814 09:39:03.177723  174943 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20210814093902-6746" container
	I0814 09:39:03.177793  174943 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:39:03.215811  174943 cli_runner.go:115] Run: docker volume create old-k8s-version-20210814093902-6746 --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:39:03.253142  174943 oci.go:102] Successfully created a docker volume old-k8s-version-20210814093902-6746
	I0814 09:39:03.253222  174943 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210814093902-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --entrypoint /usr/bin/test -v old-k8s-version-20210814093902-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:39:04.056508  174943 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210814093902-6746
	W0814 09:39:04.056559  174943 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:39:04.056594  174943 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:39:04.056630  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:04.056654  174943 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:39:04.056655  174943 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:39:04.056713  174943 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210814093902-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:39:04.139991  174943 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210814093902-6746 --name old-k8s-version-20210814093902-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210814093902-6746 --network old-k8s-version-20210814093902-6746 --ip 192.168.58.2 --volume old-k8s-version-20210814093902-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:39:04.626452  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Running}}
	I0814 09:39:04.671458  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:04.714884  174943 cli_runner.go:115] Run: docker exec old-k8s-version-20210814093902-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:39:04.854255  174943 oci.go:278] the created container "old-k8s-version-20210814093902-6746" has a running status.
	I0814 09:39:04.854292  174943 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa...
	I0814 09:39:05.034841  174943 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:39:05.415414  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:05.458215  174943 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:39:05.458235  174943 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210814093902-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:39:07.973443  174943 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210814093902-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.916668349s)
	I0814 09:39:07.973474  174943 kic.go:188] duration metric: took 3.916818 seconds to extract preloaded images to volume
	I0814 09:39:07.973538  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:08.011255  174943 machine.go:88] provisioning docker machine ...
	I0814 09:39:08.011292  174943 ubuntu.go:169] provisioning hostname "old-k8s-version-20210814093902-6746"
	I0814 09:39:08.011362  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.048765  174943 main.go:130] libmachine: Using SSH client type: native
	I0814 09:39:08.049003  174943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0814 09:39:08.049029  174943 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210814093902-6746 && echo "old-k8s-version-20210814093902-6746" | sudo tee /etc/hostname
	I0814 09:39:08.188179  174943 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210814093902-6746
	
	I0814 09:39:08.188248  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.227743  174943 main.go:130] libmachine: Using SSH client type: native
	I0814 09:39:08.227920  174943 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32923 <nil> <nil>}
	I0814 09:39:08.227955  174943 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210814093902-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210814093902-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210814093902-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:39:08.351978  174943 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:39:08.352004  174943 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:39:08.352038  174943 ubuntu.go:177] setting up certificates
	I0814 09:39:08.352048  174943 provision.go:83] configureAuth start
	I0814 09:39:08.352093  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:08.390983  174943 provision.go:138] copyHostCerts
	I0814 09:39:08.391046  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:39:08.391054  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:39:08.391107  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:39:08.391186  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:39:08.391201  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:39:08.391222  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:39:08.391269  174943 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:39:08.391276  174943 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:39:08.391293  174943 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:39:08.391328  174943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210814093902-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210814093902-6746]
	I0814 09:39:08.503869  174943 provision.go:172] copyRemoteCerts
	I0814 09:39:08.503920  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:39:08.503953  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.542791  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.632044  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:39:08.648011  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0814 09:39:08.663392  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:39:08.678374  174943 provision.go:86] duration metric: configureAuth took 326.31671ms
	I0814 09:39:08.678393  174943 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:39:08.678545  174943 config.go:177] Loaded profile config "old-k8s-version-20210814093902-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0814 09:39:08.678558  174943 machine.go:91] provisioned docker machine in 667.282302ms
	I0814 09:39:08.678567  174943 client.go:171] LocalClient.Create took 5.682731021s
	I0814 09:39:08.678591  174943 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210814093902-6746" took 5.682819008s
	I0814 09:39:08.678603  174943 start.go:267] post-start starting for "old-k8s-version-20210814093902-6746" (driver="docker")
	I0814 09:39:08.678611  174943 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:39:08.678659  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:39:08.678705  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.716852  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.803556  174943 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:39:08.806084  174943 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:39:08.806105  174943 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:39:08.806120  174943 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:39:08.806127  174943 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:39:08.806137  174943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:39:08.806181  174943 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:39:08.806271  174943 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:39:08.806379  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:39:08.812467  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:39:08.827995  174943 start.go:270] post-start completed in 149.382379ms
	I0814 09:39:08.828361  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:08.867577  174943 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/config.json ...
	I0814 09:39:08.867772  174943 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:39:08.867807  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:08.905591  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:08.993182  174943 start.go:129] duration metric: createHost completed in 5.999767131s
	I0814 09:39:08.993204  174943 start.go:80] releasing machines lock for "old-k8s-version-20210814093902-6746", held for 5.999907972s
	I0814 09:39:08.993277  174943 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210814093902-6746
	I0814 09:39:09.031752  174943 ssh_runner.go:149] Run: systemctl --version
	I0814 09:39:09.031796  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:09.031857  174943 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:39:09.031938  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:09.070624  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:09.073066  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:09.156561  174943 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:39:09.195165  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:39:09.204055  174943 docker.go:153] disabling docker service ...
	I0814 09:39:09.204109  174943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:39:09.218986  174943 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:39:09.227073  174943 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:39:09.292773  174943 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:39:09.350837  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:39:09.359574  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:39:09.371511  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
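
For context: the two commands above stage the CRI tooling config and the containerd daemon config. /etc/crictl.yaml points crictl at the containerd socket, and the long base64 payload decodes to minikube's generated /etc/containerd/config.toml (root = "/var/lib/containerd", the gRPC socket, CRI settings such as sandbox_image = "k8s.gcr.io/pause:3.1", and the overlayfs snapshotter). Base64-encoding the TOML sidesteps shell quoting of a multi-line file full of quotes. A minimal sketch in Go of that encode-then-decode round trip, with a truncated config standing in for the real one:

	package main
	
	import (
		"encoding/base64"
		"fmt"
	)
	
	func main() {
		// Truncated stand-in for the full config.toml carried in the log above.
		configTOML := "root = \"/var/lib/containerd\"\nstate = \"/run/containerd\"\noom_score = 0\n"
		enc := base64.StdEncoding.EncodeToString([]byte(configTOML))
		// The remote side decodes the payload and writes it with root privileges.
		fmt.Printf("sudo mkdir -p /etc/containerd && printf %%s \"%s\" | base64 -d | sudo tee /etc/containerd/config.toml\n", enc)
	}
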
	I0814 09:39:09.383254  174943 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:39:09.388925  174943 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:39:09.388993  174943 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:39:09.395292  174943 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:39:09.401003  174943 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:39:09.455625  174943 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:39:09.517464  174943 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:39:09.517527  174943 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:39:09.520948  174943 start.go:413] Will wait 60s for crictl version
	I0814 09:39:09.521000  174943 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:39:09.543774  174943 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:39:09Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
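
For context: containerd's CRI server can answer "server is not initialized yet" for a short window after the `systemctl restart containerd` above, so the harness waits and re-runs `sudo crictl version` rather than failing immediately. A generic Go sketch of that retry shape (retryUntil below is illustrative, not minikube's actual retry.go):

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// retryUntil re-runs fn until it succeeds or the timeout elapses.
	func retryUntil(timeout, wait time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up after %s: %w", timeout, err)
			}
			time.Sleep(wait)
		}
	}
	
	func main() {
		attempts := 0
		err := retryUntil(60*time.Second, time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("server is not initialized yet")
			}
			return nil
		})
		fmt.Println(err, "after", attempts, "attempts")
	}
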
	I0814 09:39:20.590544  174943 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:39:20.612712  174943 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:39:20.612770  174943 ssh_runner.go:149] Run: containerd --version
	I0814 09:39:20.634809  174943 ssh_runner.go:149] Run: containerd --version
	I0814 09:39:20.656509  174943 out.go:177] * Preparing Kubernetes v1.14.0 on containerd 1.4.9 ...
	I0814 09:39:20.656580  174943 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210814093902-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:39:20.693435  174943 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:39:20.696553  174943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
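
For context: the /etc/hosts rewrite above uses a small shell idiom. grep -v drops any stale host.minikube.internal entry, the fresh "IP<tab>name" line is appended, and the result is staged under /tmp before a sudo cp back, because output redirection does not run under sudo. The same pattern appears again later for control-plane.minikube.internal. A Go sketch of composing that command (hostsInjectCmd is a hypothetical helper, not minikube's):

	package main
	
	import "fmt"
	
	// hostsInjectCmd builds the grep/echo/cp pipeline seen in the log:
	// drop any old entry for name, append "ip<TAB>name", sudo-copy back.
	func hostsInjectCmd(ip, name string) string {
		return fmt.Sprintf(
			"{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
			name, ip, name)
	}
	
	func main() {
		fmt.Println(hostsInjectCmd("192.168.58.1", "host.minikube.internal"))
	}
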
	I0814 09:39:20.706069  174943 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:39:20.706115  174943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:39:20.727361  174943 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:39:20.727378  174943 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:39:20.727410  174943 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:39:20.747946  174943 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:39:20.747961  174943 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:39:20.748000  174943 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:39:20.768036  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:20.768056  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:20.768065  174943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:39:20.768078  174943 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210814093902-6746 NodeName:old-k8s-version-20210814093902-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:39:20.768186  174943 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20210814093902-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210814093902-6746
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
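
For context: the generated kubeadm config above is four YAML documents in one file: InitConfiguration and ClusterConfiguration on kubeadm.k8s.io/v1beta1 (the API version matching the v1.14-era kubeadm binary), plus a KubeletConfiguration and a KubeProxyConfiguration. minikube renders the file from a template over the kubeadm options struct logged above; a minimal Go sketch of that rendering style (a tiny stand-in template, not the real one):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// A tiny stand-in for minikube's much larger kubeadm config template.
	var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(
		"apiVersion: kubeadm.k8s.io/v1beta1\n" +
			"kind: InitConfiguration\n" +
			"localAPIEndpoint:\n" +
			"  advertiseAddress: {{.AdvertiseAddress}}\n" +
			"  bindPort: {{.APIServerPort}}\n"))
	
	func main() {
		params := struct {
			AdvertiseAddress string
			APIServerPort    int
		}{"192.168.58.2", 8443}
		if err := kubeadmTmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}
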
	
	I0814 09:39:20.768270  174943 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20210814093902-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0814 09:39:20.768309  174943 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0814 09:39:20.774391  174943 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:39:20.774447  174943 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:39:20.780415  174943 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (652 bytes)
	I0814 09:39:20.791377  174943 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:39:20.802514  174943 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0814 09:39:20.813779  174943 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:39:20.816320  174943 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:39:20.824300  174943 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746 for IP: 192.168.58.2
	I0814 09:39:20.824348  174943 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:39:20.824369  174943 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:39:20.824424  174943 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key
	I0814 09:39:20.824434  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt with IP's: []
	I0814 09:39:21.018286  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt ...
	I0814 09:39:21.018312  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: {Name:mk4ac4b7e75286e8b79fd45038770d55847165f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.018501  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key ...
	I0814 09:39:21.018513  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.key: {Name:mkfcf32e4fa8803fcf65dae722ac7ab1f6cf1297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.018594  174943 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041
	I0814 09:39:21.018604  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:39:21.104538  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 ...
	I0814 09:39:21.104563  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041: {Name:mk874ad2629ebee729aa095bca20e1e9bf8bb4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.104717  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041 ...
	I0814 09:39:21.104730  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041: {Name:mk1727c0c17503796e9734870659a2a374768265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.104817  174943 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt
	I0814 09:39:21.104885  174943 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key
	I0814 09:39:21.104937  174943 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key
	I0814 09:39:21.104946  174943 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt with IP's: []
	I0814 09:39:21.258449  174943 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt ...
	I0814 09:39:21.258494  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt: {Name:mk2f05e5d279ba24cfb466b9815a114a874915fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:21.258659  174943 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key ...
	I0814 09:39:21.258672  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key: {Name:mk44a042ae30bbae120b4e022a4a2b2eb4d2ee31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
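
For context: the crypto.go steps above mint client, apiserver, and aggregator proxy-client keypairs signed by the cached minikubeCA. A condensed Go sketch of CA-signed certificate generation with crypto/x509, reusing the apiserver IP SANs from the log (illustrative only, not minikube's code):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Self-signed CA, standing in for the cached minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Apiserver cert with the IP SANs shown in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			IPAddresses: []net.IP{
				net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
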
	I0814 09:39:21.258831  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:39:21.258867  174943 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:39:21.258878  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:39:21.258899  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:39:21.258922  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:39:21.258944  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:39:21.258985  174943 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:39:21.259945  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:39:21.276890  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 09:39:21.316234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:39:21.331459  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:39:21.346701  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:39:21.361781  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:39:21.376654  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:39:21.391554  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:39:21.406234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:39:21.421234  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:39:21.436252  174943 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:39:21.451359  174943 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:39:21.462306  174943 ssh_runner.go:149] Run: openssl version
	I0814 09:39:21.466604  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:39:21.472871  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.475534  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.475567  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:39:21.479760  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:39:21.486094  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:39:21.492443  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.495062  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.495096  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:39:21.499238  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:39:21.505598  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:39:21.511951  174943 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.514698  174943 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.514734  174943 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:39:21.518872  174943 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
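
For context: each `openssl x509 -hash -noout` / `ln -fs` pair above maintains OpenSSL's hashed certificate directory. A CA PEM is linked as /etc/ssl/certs/<subject-hash>.0 (b5213941.0 for minikubeCA above) so TLS clients can locate it by subject-name hash. A Go sketch of computing that link name:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject-name hash used to index hashed cert dirs.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("sudo ln -fs %s /etc/ssl/certs/%s.0\n", pemPath, hash)
	}
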
	I0814 09:39:21.525190  174943 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210814093902-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210814093902-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:39:21.525270  174943 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:39:21.525302  174943 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:39:21.549326  174943 cri.go:76] found id: ""
	I0814 09:39:21.549383  174943 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:39:21.555843  174943 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:39:21.562038  174943 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:39:21.562091  174943 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:39:21.567941  174943 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:39:21.567973  174943 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
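
For context: the --ignore-preflight-errors list above is needed because several kubeadm preflight checks (swap, the kernel SystemVerification suite, the bridge-nf-call-iptables proc file) cannot pass inside a docker-driver container, as the "ignoring SystemVerification for kubeadm because of docker driver" line earlier notes. A Go sketch of assembling such a flag from an illustrative subset:

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Illustrative subset of the checks skipped in the command above.
		ignored := []string{
			"Swap",
			"SystemVerification",
			"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
		}
		fmt.Printf("kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s\n",
			strings.Join(ignored, ","))
	}
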
	I0814 09:39:21.866263  174943 out.go:204]   - Generating certificates and keys ...
	I0814 09:39:23.763238  174943 out.go:204]   - Booting up control plane ...
	I0814 09:39:33.306959  174943 out.go:204]   - Configuring RBAC rules ...
	I0814 09:39:33.721817  174943 cni.go:93] Creating CNI manager for ""
	I0814 09:39:33.721846  174943 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:39:33.723323  174943 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:39:33.723409  174943 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:39:33.726795  174943 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0814 09:39:33.726810  174943 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:39:33.738760  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:39:34.040132  174943 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:39:34.040255  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.040255  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=old-k8s-version-20210814093902-6746 minikube.k8s.io/updated_at=2021_08_14T09_39_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.055567  174943 ops.go:34] apiserver oom_adj: 16
	I0814 09:39:34.055587  174943 ops.go:39] adjusting apiserver oom_adj to -10
	I0814 09:39:34.055600  174943 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
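
For context: ops.go above reads the apiserver's oom_adj (16) and lowers it to -10, so the kernel's OOM killer prefers other processes if the 2200MB node runs short of memory; tee is used because a plain redirection would not run under sudo. A Go sketch of issuing that adjustment:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Find the apiserver pid and lower its OOM adjustment via sudo tee.
		cmd := `echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj`
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Println(string(out), err)
	}
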
	I0814 09:39:34.148540  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:34.747637  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:35.247207  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:35.747299  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:36.247652  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:36.747784  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:38.872980  174943 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.125158429s)
	I0814 09:39:39.748032  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:42.922314  174943 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.174250662s)
	I0814 09:39:43.247755  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:43.747397  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:44.247035  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:44.747238  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:45.247088  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:45.747687  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:46.247881  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:46.747484  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:47.247039  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:47.747143  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:48.247746  174943 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:39:48.313676  174943 kubeadm.go:985] duration metric: took 14.273486803s to wait for elevateKubeSystemPrivileges.
	I0814 09:39:48.313705  174943 kubeadm.go:392] StartCluster complete in 26.788521087s
	I0814 09:39:48.313722  174943 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:48.313812  174943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:39:48.315397  174943 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:39:48.831656  174943 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210814093902-6746" rescaled to 1
	I0814 09:39:48.831704  174943 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0814 09:39:48.834633  174943 out.go:177] * Verifying Kubernetes components...
	I0814 09:39:48.834687  174943 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:39:48.831754  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:39:48.831781  174943 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0814 09:39:48.834820  174943 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.834840  174943 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.831926  174943 config.go:177] Loaded profile config "old-k8s-version-20210814093902-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W0814 09:39:48.834850  174943 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:39:48.834878  174943 host.go:66] Checking if "old-k8s-version-20210814093902-6746" exists ...
	I0814 09:39:48.834886  174943 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.834907  174943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210814093902-6746"
	I0814 09:39:48.835225  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:48.835399  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:48.888396  174943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:39:48.888559  174943 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:39:48.888577  174943 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:39:48.888634  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:48.903387  174943 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210814093902-6746"
	W0814 09:39:48.903420  174943 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:39:48.903452  174943 host.go:66] Checking if "old-k8s-version-20210814093902-6746" exists ...
	I0814 09:39:48.904037  174943 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:39:48.937455  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:48.938442  174943 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 09:39:48.940293  174943 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210814093902-6746" to be "Ready" ...
	I0814 09:39:48.954690  174943 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:39:48.954717  174943 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:39:48.954770  174943 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:39:48.993028  174943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32923 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:39:49.121367  174943 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:39:49.221250  174943 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:39:49.431129  174943 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0814 09:39:49.645229  174943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0814 09:39:49.645251  174943 addons.go:344] enableAddons completed in 813.480152ms
	I0814 09:39:50.947459  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:39:53.447590  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:39:55.947391  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:39:58.447510  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:00.947417  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:02.947492  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:05.446550  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:07.446706  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:09.447298  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:11.946817  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:14.447458  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:16.947041  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:19.447186  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:21.946642  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:24.447360  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:26.946467  174943 node_ready.go:58] node "old-k8s-version-20210814093902-6746" has status "Ready":"False"
	I0814 09:40:28.947685  174943 node_ready.go:49] node "old-k8s-version-20210814093902-6746" has status "Ready":"True"
	I0814 09:40:28.947712  174943 node_ready.go:38] duration metric: took 40.007394634s waiting for node "old-k8s-version-20210814093902-6746" to be "Ready" ...
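
For context: node_ready.go polls the node until its Ready condition reports True, which here took just over 40 seconds of the 6-minute budget. The wait is a plain poll loop; a generic Go sketch with a stand-in readiness check:

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// waitNodeReady polls isReady at the given interval until it returns
	// true or the timeout elapses, mirroring the wait logged above.
	func waitNodeReady(timeout, interval time.Duration, isReady func() bool) error {
		deadline := time.Now().Add(timeout)
		for !isReady() {
			if time.Now().After(deadline) {
				return fmt.Errorf("node not Ready within %s", timeout)
			}
			time.Sleep(interval)
		}
		return nil
	}
	
	func main() {
		start := time.Now()
		err := waitNodeReady(6*time.Minute, 2*time.Second, func() bool {
			// Stand-in for fetching the node and checking its Ready condition.
			return time.Since(start) > 5*time.Second
		})
		fmt.Println(err)
	}
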
	I0814 09:40:28.947724  174943 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:40:28.955933  174943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:30.968391  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:33.467728  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:35.968424  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:38.467449  174943 pod_ready.go:102] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"False"
	I0814 09:40:40.967143  174943 pod_ready.go:92] pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace has status "Ready":"True"
	I0814 09:40:40.967172  174943 pod_ready.go:81] duration metric: took 12.011209905s waiting for pod "coredns-fb8b8dccf-nfccv" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:40.967188  174943 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xnmq2" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:40.970639  174943 pod_ready.go:92] pod "kube-proxy-xnmq2" in "kube-system" namespace has status "Ready":"True"
	I0814 09:40:40.970656  174943 pod_ready.go:81] duration metric: took 3.451668ms waiting for pod "kube-proxy-xnmq2" in "kube-system" namespace to be "Ready" ...
	I0814 09:40:40.970667  174943 pod_ready.go:38] duration metric: took 12.022929345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:40:40.970690  174943 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:40:40.970738  174943 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:40:40.989371  174943 api_server.go:70] duration metric: took 52.157635575s to wait for apiserver process to appear ...
	I0814 09:40:40.989389  174943 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:40:40.989397  174943 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:40:40.994102  174943 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:40:40.994718  174943 api_server.go:139] control plane version: v1.14.0
	I0814 09:40:40.994735  174943 api_server.go:129] duration metric: took 5.341752ms to wait for apiserver health ...
	I0814 09:40:40.994742  174943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:40:40.997346  174943 system_pods.go:59] 5 kube-system pods found
	I0814 09:40:40.997365  174943 system_pods.go:61] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997370  174943 system_pods.go:61] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997374  174943 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997377  174943 system_pods.go:61] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997381  174943 system_pods.go:61] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:40.997385  174943 system_pods.go:74] duration metric: took 2.638722ms to wait for pod list to return data ...
	I0814 09:40:40.997395  174943 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:40:40.999462  174943 default_sa.go:45] found service account: "default"
	I0814 09:40:40.999478  174943 default_sa.go:55] duration metric: took 2.078779ms for default service account to be created ...
	I0814 09:40:40.999484  174943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:40:41.001841  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:41.001858  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001864  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001868  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001872  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001877  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.001899  174943 retry.go:31] will retry after 305.063636ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:41.310417  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:41.310448  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310456  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310462  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310482  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310490  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.310507  174943 retry.go:31] will retry after 338.212508ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:41.651868  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:41.651891  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651896  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651904  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651910  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651915  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:41.651932  174943 retry.go:31] will retry after 378.459802ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:42.034725  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:42.034751  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034757  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034761  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034765  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034768  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.034782  174943 retry.go:31] will retry after 469.882201ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:42.508622  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:42.508652  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508659  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508666  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508672  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508679  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:42.508695  174943 retry.go:31] will retry after 667.365439ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:43.180084  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:43.180112  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180118  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180122  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180127  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180131  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.180146  174943 retry.go:31] will retry after 597.243124ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:43.781014  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:43.781038  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781043  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781047  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781051  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781055  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:43.781067  174943 retry.go:31] will retry after 789.889932ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:44.575280  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:44.575310  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575318  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575325  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575331  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575339  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:44.575358  174943 retry.go:31] will retry after 951.868007ms: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:45.530651  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:45.530677  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530689  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530698  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530703  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530706  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:45.530720  174943 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:46.876109  174943 system_pods.go:86] 5 kube-system pods found
	I0814 09:40:46.876135  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876142  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876149  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876155  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876160  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:46.876177  174943 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:48.756938  174943 system_pods.go:86] 7 kube-system pods found
	I0814 09:40:48.756965  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.756973  174943 system_pods.go:89] "etcd-old-k8s-version-20210814093902-6746" [b3f06cb8-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:48.756981  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.756988  174943 system_pods.go:89] "kube-apiserver-old-k8s-version-20210814093902-6746" [b48904a1-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:48.756996  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.757002  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.757008  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:48.757026  174943 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:51.454433  174943 system_pods.go:86] 7 kube-system pods found
	I0814 09:40:51.454463  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454471  174943 system_pods.go:89] "etcd-old-k8s-version-20210814093902-6746" [b3f06cb8-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:51.454478  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454486  174943 system_pods.go:89] "kube-apiserver-old-k8s-version-20210814093902-6746" [b48904a1-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:51.454499  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454505  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454512  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:51.454529  174943 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:53.953894  174943 system_pods.go:86] 7 kube-system pods found
	I0814 09:40:53.953922  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:53.953930  174943 system_pods.go:89] "etcd-old-k8s-version-20210814093902-6746" [b3f06cb8-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:53.953935  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:53.953941  174943 system_pods.go:89] "kube-apiserver-old-k8s-version-20210814093902-6746" [b48904a1-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:53.953947  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:53.953953  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:53.953958  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:53.953976  174943 retry.go:31] will retry after 3.420895489s: missing components: etcd, kube-apiserver, kube-scheduler
	I0814 09:40:57.403597  174943 system_pods.go:86] 8 kube-system pods found
	I0814 09:40:57.403628  174943 system_pods.go:89] "coredns-fb8b8dccf-nfccv" [90c2ae3f-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:57.403637  174943 system_pods.go:89] "etcd-old-k8s-version-20210814093902-6746" [b3f06cb8-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:57.403643  174943 system_pods.go:89] "kindnet-9rbws" [90e08770-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:57.403650  174943 system_pods.go:89] "kube-apiserver-old-k8s-version-20210814093902-6746" [b48904a1-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:57.403657  174943 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210814093902-6746" [ab986d04-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:57.403684  174943 system_pods.go:89] "kube-proxy-xnmq2" [90e06827-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:57.403691  174943 system_pods.go:89] "kube-scheduler-old-k8s-version-20210814093902-6746" [b81c92f1-fce3-11eb-977c-0242f298e734] Pending
	I0814 09:40:57.403700  174943 system_pods.go:89] "storage-provisioner" [91a7567d-fce3-11eb-977c-0242f298e734] Running
	I0814 09:40:57.403719  174943 retry.go:31] will retry after 4.133785681s: missing components: kube-scheduler
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a747d02c26253       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   0eed254b3316c
	ef9cd508c4bcf       296a6d5035e2d       4 minutes ago       Running             coredns                   0                   79704c1ba1377
	9753722af7745       6de166512aa22       4 minutes ago       Running             kindnet-cni               0                   e9f1ed022aae0
	66b515b3e4a14       adb2816ea823a       4 minutes ago       Running             kube-proxy                0                   63ba7b0ef4459
	0fcd2105780a3       bc2bb319a7038       4 minutes ago       Running             kube-controller-manager   0                   74d460f2e7a7f
	8bcc07d573eb1       0369cf4303ffd       4 minutes ago       Running             etcd                      0                   60a80199b4a57
	d3bf648d26067       6be0dc1302e30       4 minutes ago       Running             kube-scheduler            0                   faadff72e3a9c
	ab29adb23277d       3d174f00aa39e       4 minutes ago       Running             kube-apiserver            0                   7b9c957209d40
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:41:02 UTC. --
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.541959125Z" level=info msg="Connect containerd service"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542013371Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542632511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542708189Z" level=info msg="Start subscribing containerd event"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542770137Z" level=info msg="Start recovering state"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542857642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542918689Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.542967809Z" level=info msg="containerd successfully booted in 0.040983s"
	Aug 14 09:36:58 pause-20210814093545-6746 systemd[1]: Started containerd container runtime.
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625754612Z" level=info msg="Start event monitor"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625793205Z" level=info msg="Start snapshots syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625802136Z" level=info msg="Start cni network conf syncer"
	Aug 14 09:36:58 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:36:58.625807599Z" level=info msg="Start streaming server"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.008705018Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.026044573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22 pid=2510
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.163827167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:80eca970-b4ab-4ac8-af20-f814411672fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.166278582Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.228933904Z" level=info msg="CreateContainer within sandbox \"0eed254b3316ccafefbbdf18a3217373fe1a0df032e6ce403e5e9e56016e0a22\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.229330725Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:18 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:18.371077991Z" level=info msg="StartContainer for \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\" returns successfully"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449187715Z" level=info msg="Finish piping stderr of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.449222888Z" level=info msg="Finish piping stdout of container \"a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564\""
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.450533507Z" level=info msg="TaskExit event &TaskExit{ContainerID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,ID:a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564,Pid:2562,ExitStatus:255,ExitedAt:2021-08-14 09:37:32.450264852 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501408095Z" level=info msg="shim disconnected" id=a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564
	Aug 14 09:37:32 pause-20210814093545-6746 containerd[2196]: time="2021-08-14T09:37:32.501502681Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	
	* 
	* ==> coredns [ef9cd508c4bcf303b39008a4f028d3fc7323e1f97e16a46bf8f3b752322d9431] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210814093545-6746
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210814093545-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=pause-20210814093545-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_36_29_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:36:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210814093545-6746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Aug 2021 09:37:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:37:09 +0000   Sat, 14 Aug 2021 09:36:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210814093545-6746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                0a58e276-bec2-4249-8c1b-588583c789f0
	  Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-7njgj                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m27s
	  kube-system                 etcd-pause-20210814093545-6746                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m35s
	  kube-system                 kindnet-tbw9g                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m28s
	  kube-system                 kube-apiserver-pause-20210814093545-6746             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-pause-20210814093545-6746    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-zgc2h                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-pause-20210814093545-6746             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m36s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s  kubelet     Node pause-20210814093545-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s  kubelet     Node pause-20210814093545-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s  kubelet     Node pause-20210814093545-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m26s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m14s  kubelet     Node pause-20210814093545-6746 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug14 09:29] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth38d0eb85
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a bd 7c 39 49 62 08 06        ........|9Ib..
	[Aug14 09:30] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:32] cgroup: cgroup2: unknown option "nsdelegate"
	[ +13.411048] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.035402] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:33] cgroup: cgroup2: unknown option "nsdelegate"
	[  +1.451942] cgroup: cgroup2: unknown option "nsdelegate"
	[ +14.641136] tee (136175): /proc/134359/oom_adj is deprecated, please use /proc/134359/oom_score_adj instead.
	[Aug14 09:34] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.573195] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe29e5784
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 4c 1a e2 69 4b 08 06        .......L..iK..
	[  +8.954711] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:35] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth529d8992
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 22 4f ef 2e 27 f0 08 06        ......"O..'...
	[  +9.430011] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:36] cgroup: cgroup2: unknown option "nsdelegate"
	[ +36.823390] cgroup: cgroup2: unknown option "nsdelegate"
	[ +15.237179] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth43e4fc69
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 7b 35 3d 7d 88 08 06        .......{5=}...
	[Aug14 09:37] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:38] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:39] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd8221cd8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 44 cc a6 70 5e 08 06        .......D..p^..
	
	* 
	* ==> etcd [8bcc07d573eb17de988b4a7ff6a59d84fca52b4e31ffd84e54100a77cf5717ed] <==
	* 2021-08-14 09:40:48.820397 I | embed: rejected connection from "127.0.0.1:34826" (error "write tcp 127.0.0.1:2379->127.0.0.1:34826: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.820524 I | embed: rejected connection from "127.0.0.1:34788" (error "write tcp 127.0.0.1:2379->127.0.0.1:34788: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.820586 I | embed: rejected connection from "127.0.0.1:34792" (error "write tcp 127.0.0.1:2379->127.0.0.1:34792: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.820858 I | embed: rejected connection from "127.0.0.1:34776" (error "write tcp 127.0.0.1:2379->127.0.0.1:34776: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.821003 I | embed: rejected connection from "127.0.0.1:34784" (error "write tcp 127.0.0.1:2379->127.0.0.1:34784: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.821113 I | embed: rejected connection from "127.0.0.1:34758" (error "write tcp 127.0.0.1:2379->127.0.0.1:34758: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.821306 I | embed: rejected connection from "127.0.0.1:34814" (error "write tcp 127.0.0.1:2379->127.0.0.1:34814: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822003 I | embed: rejected connection from "127.0.0.1:34794" (error "write tcp 127.0.0.1:2379->127.0.0.1:34794: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822033 I | embed: rejected connection from "127.0.0.1:34804" (error "write tcp 127.0.0.1:2379->127.0.0.1:34804: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822098 I | embed: rejected connection from "127.0.0.1:34816" (error "write tcp 127.0.0.1:2379->127.0.0.1:34816: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822334 I | embed: rejected connection from "127.0.0.1:34820" (error "write tcp 127.0.0.1:2379->127.0.0.1:34820: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.822376 I | embed: rejected connection from "127.0.0.1:34774" (error "write tcp 127.0.0.1:2379->127.0.0.1:34774: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.823267 I | embed: rejected connection from "127.0.0.1:34822" (error "write tcp 127.0.0.1:2379->127.0.0.1:34822: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.823457 I | embed: rejected connection from "127.0.0.1:34288" (error "write tcp 127.0.0.1:2379->127.0.0.1:34288: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.823572 I | embed: rejected connection from "127.0.0.1:34796" (error "write tcp 127.0.0.1:2379->127.0.0.1:34796: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.825638 I | embed: rejected connection from "127.0.0.1:34062" (error "write tcp 127.0.0.1:2379->127.0.0.1:34062: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826416 I | embed: rejected connection from "127.0.0.1:33842" (error "write tcp 127.0.0.1:2379->127.0.0.1:33842: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826506 I | embed: rejected connection from "127.0.0.1:34756" (error "write tcp 127.0.0.1:2379->127.0.0.1:34756: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826573 I | embed: rejected connection from "127.0.0.1:33860" (error "write tcp 127.0.0.1:2379->127.0.0.1:33860: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.826983 I | embed: rejected connection from "127.0.0.1:34790" (error "write tcp 127.0.0.1:2379->127.0.0.1:34790: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.828627 I | embed: rejected connection from "127.0.0.1:33960" (error "write tcp 127.0.0.1:2379->127.0.0.1:33960: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.830733 I | embed: rejected connection from "127.0.0.1:33828" (error "write tcp 127.0.0.1:2379->127.0.0.1:33828: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.901389 I | embed: rejected connection from "127.0.0.1:34742" (error "write tcp 127.0.0.1:2379->127.0.0.1:34742: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.902345 I | embed: rejected connection from "127.0.0.1:33946" (error "write tcp 127.0.0.1:2379->127.0.0.1:33946: write: broken pipe", ServerName "")
	2021-08-14 09:40:48.902626 I | embed: rejected connection from "127.0.0.1:33836" (error "write tcp 127.0.0.1:2379->127.0.0.1:33836: write: broken pipe", ServerName "")
	
	* 
	* ==> kernel <==
	*  09:41:03 up  1:23,  0 users,  load average: 0.78, 2.02, 1.72
	Linux pause-20210814093545-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ab29adb23277d92f4f749c46d653ad2baa8f679bbee146d1beac8e5aab8ec086] <==
	* E0814 09:40:47.948315       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:40:47.949813       1 trace.go:205] Trace[87157100]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:39:47.945) (total time: 60004ms):
	Trace[87157100]: [1m0.004385913s] [1m0.004385913s] END
	W0814 09:40:48.718147       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:49.282760       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:40:51.460165       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0814 09:40:58.661909       1 trace.go:205] Trace[541385592]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:40:55.347) (total time: 3314ms):
	Trace[541385592]: [3.314452536s] [3.314452536s] END
	I0814 09:40:58.661925       1 trace.go:205] Trace[395163066]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:40:01.527) (total time: 57134ms):
	Trace[395163066]: [57.13448904s] [57.13448904s] END
	I0814 09:40:58.662278       1 trace.go:205] Trace[1975391147]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:40:01.527) (total time: 57134ms):
	Trace[1975391147]: ---"Listing from storage done" 57134ms (09:40:00.661)
	Trace[1975391147]: [57.134853739s] [57.134853739s] END
	I0814 09:40:58.662292       1 trace.go:205] Trace[216913828]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:40:55.347) (total time: 3314ms):
	Trace[216913828]: ---"Listing from storage done" 3314ms (09:40:00.661)
	Trace[216913828]: [3.314848669s] [3.314848669s] END
	I0814 09:41:01.321856       1 trace.go:205] Trace[622601983]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20210814093545-6746,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:40:58.950) (total time: 2371ms):
	Trace[622601983]: ---"About to write a response" 2371ms (09:41:00.321)
	Trace[622601983]: [2.371407821s] [2.371407821s] END
	I0814 09:41:02.434043       1 trace.go:205] Trace[878415839]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:40:29.362) (total time: 33071ms):
	Trace[878415839]: ---"About to write a response" 33071ms (09:41:00.433)
	Trace[878415839]: [33.071083741s] [33.071083741s] END
	I0814 09:41:02.434075       1 trace.go:205] Trace[605017891]: "Get" url:/api/v1/namespaces/kube-node-lease,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:40:39.357) (total time: 23076ms):
	Trace[605017891]: ---"About to write a response" 23076ms (09:41:00.434)
	Trace[605017891]: [23.076221721s] [23.076221721s] END
	
	* 
	* ==> kube-controller-manager [0fcd2105780a328964f9c30e4fc83c19689d1d0a6aac05dea8ef621aa6bb0216] <==
	* I0814 09:36:35.558636       1 shared_informer.go:247] Caches are synced for cronjob 
	I0814 09:36:35.594056       1 shared_informer.go:247] Caches are synced for disruption 
	I0814 09:36:35.594080       1 disruption.go:371] Sending events to api server.
	I0814 09:36:35.618269       1 shared_informer.go:247] Caches are synced for attach detach 
	I0814 09:36:35.626411       1 shared_informer.go:247] Caches are synced for PV protection 
	I0814 09:36:35.658052       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0814 09:36:35.658104       1 shared_informer.go:247] Caches are synced for expand 
	I0814 09:36:35.666214       1 shared_informer.go:247] Caches are synced for endpoint 
	I0814 09:36:35.666883       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tbw9g"
	I0814 09:36:35.670094       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zgc2h"
	I0814 09:36:35.736985       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0814 09:36:35.758656       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0814 09:36:35.758886       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0814 09:36:35.767757       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.807894       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0814 09:36:35.810106       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:36:35.813582       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0814 09:36:36.206921       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.206946       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:36:36.237226       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:36:36.328706       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0814 09:36:36.413536       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:36.418045       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-7njgj"
	I0814 09:36:36.433569       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-wm4hd"
	I0814 09:36:50.510455       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [66b515b3e4a14fa94b7c66bf716bbb6b1a292a0066cd3bd9aa09cd86441b0a97] <==
	* I0814 09:36:37.040930       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:36:37.040978       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:36:37.041009       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:36:37.135620       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:36:37.135697       1 server_others.go:212] Using iptables Proxier.
	I0814 09:36:37.135734       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:36:37.135764       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:36:37.136453       1 server.go:643] Version: v1.21.3
	I0814 09:36:37.137196       1 config.go:315] Starting service config controller
	I0814 09:36:37.138165       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:36:37.139739       1 config.go:224] Starting endpoint slice config controller
	I0814 09:36:37.139765       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:36:37.141550       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:36:37.142664       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:36:37.240414       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:36:37.240445       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d3bf648d2606793756e8ef2db2d5c4245808a066ff9ecdeb642221c67dd12119] <==
	* I0814 09:36:19.239734       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.239780       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:36:19.240100       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:36:19.240128       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:36:19.310200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:19.310391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:36:19.310488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.310570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:36:19.310642       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:36:19.310718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:36:19.310788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:19.310860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:36:19.310940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:19.311017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:19.311261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:36:19.312650       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:36:20.163865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.194900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:36:20.263532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:36:20.308670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:36:20.382950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:36:20.414192       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0814 09:36:23.340697       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:35:48 UTC, end at Sat 2021-08-14 09:41:03 UTC. --
	Aug 14 09:40:49 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:49.273417    3866 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036536    3866 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036818    3866 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036870    3866 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036899    3866 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036911    3866 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.036919    3866 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037104    3866 remote_runtime.go:62] parsed scheme: ""
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037112    3866 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037144    3866 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037152    3866 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037200    3866 remote_image.go:50] parsed scheme: ""
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037205    3866 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037212    3866 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037216    3866 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037281    3866 kubelet.go:404] "Attempting to sync node with API server"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037309    3866 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037374    3866 kubelet.go:283] "Adding apiserver pod source"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.037394    3866 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.038756    3866 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: E0814 09:40:54.305472    3866 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 14 09:40:54 pause-20210814093545-6746 kubelet[3866]: I0814 09:40:54.306013    3866 server.go:1190] "Started kubelet"
	Aug 14 09:40:54 pause-20210814093545-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:40:54 pause-20210814093545-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [a747d02c262537c9ce9782219ded7061699497fbd9be1e90c200e5ad7875e564] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 154 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00013b210, 0xc000000002)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00013b200)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc00052a720, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000140f00, 0x18e5530, 0xc00051d0c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000362440)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000362440, 0x18b3d60, 0xc000708690, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000362440, 0x3b9aca00, 0x0, 0x1, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000362440, 0x3b9aca00, 0xc0006441e0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	

                                                
                                                
-- /stdout --
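
Note on the dump above: the retry.go:31 lines near its top show minikube polling kube-system pods with a growing delay until etcd, kube-apiserver and kube-scheduler all report Running. The sketch below is a minimal illustration of that poll-with-backoff pattern, with a hypothetical missingComponents check standing in for the real apiserver query; it is not minikube's actual retry code.

	package main

	import (
		"fmt"
		"time"
	)

	// missingComponents is a hypothetical stand-in for listing kube-system
	// pods via the apiserver and returning whichever expected control-plane
	// components are not yet Running.
	func missingComponents(attempt int) []string {
		if attempt < 6 {
			return []string{"etcd", "kube-apiserver", "kube-scheduler"}
		}
		return nil
	}

	func main() {
		delay := 1300 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			missing := missingComponents(attempt)
			if len(missing) == 0 {
				fmt.Println("all expected system pods are Running")
				return
			}
			fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
			time.Sleep(delay)
			// Grow the delay between polls, roughly matching the
			// 1.3s, 1.9s, 2.7s, 3.4s, 4.1s spacing in the log above.
			delay = delay * 3 / 2
		}
	}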
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210814093545-6746 -n pause-20210814093545-6746
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210814093545-6746 -n pause-20210814093545-6746: exit status 2 (312.761094ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
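
The --format={{.APIServer}} flag is a Go text/template rendered against the status struct, which is why a single field like "Running" still comes back on stdout even when the command as a whole exits non-zero. A rough illustration follows; the Status type and its fields here are assumptions for the sketch, not minikube's exact struct.

	package main

	import (
		"os"
		"text/template"
	)

	// Status approximates the fields the report queries with --format;
	// the real minikube struct may differ.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		os.Stdout.WriteString("\n") // prints: Running
	}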
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210814093545-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210814093545-6746 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210814093545-6746 describe pod : exit status 1 (45.471326ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210814093545-6746 describe pod : exit status 1
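
The "resource name may not be empty" failure above is mechanical rather than a cluster problem: the jsonpath query found no non-running pods, so "kubectl describe pod" was invoked with zero names. A sketch of a guard that avoids it, assuming kubectl is on PATH (the context name and flags are taken from the commands logged above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctxFlag := "--context=pause-20210814093545-6746" // from the log above
		out, err := exec.Command("kubectl", ctxFlag, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("listing non-running pods failed:", err)
			return
		}
		pods := strings.Fields(string(out))
		if len(pods) == 0 {
			// Nothing to describe; calling kubectl with no names would
			// fail with "resource name may not be empty".
			fmt.Println("no non-running pods; skipping describe")
			return
		}
		args := append([]string{ctxFlag, "describe", "pod"}, pods...)
		describe, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(describe))
		if err != nil {
			fmt.Println("describe failed:", err)
		}
	}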
--- FAIL: TestPause/serial/PauseAgain (14.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210814093902-6746 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210814093902-6746 --alsologtostderr -v=1: exit status 80 (1.792190942s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20210814093902-6746 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 09:43:16.567739  199799 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:43:16.567828  199799 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:16.567838  199799 out.go:311] Setting ErrFile to fd 2...
	I0814 09:43:16.567842  199799 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:16.567950  199799 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:43:16.568109  199799 out.go:305] Setting JSON to false
	I0814 09:43:16.568126  199799 mustload.go:65] Loading cluster: old-k8s-version-20210814093902-6746
	I0814 09:43:16.568434  199799 config.go:177] Loaded profile config "old-k8s-version-20210814093902-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0814 09:43:16.568822  199799 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210814093902-6746 --format={{.State.Status}}
	I0814 09:43:16.613226  199799 host.go:66] Checking if "old-k8s-version-20210814093902-6746" exists ...
	I0814 09:43:16.613878  199799 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210814093902-6746 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0814 09:43:16.616340  199799 out.go:177] * Pausing node old-k8s-version-20210814093902-6746 ... 
	I0814 09:43:16.616365  199799 host.go:66] Checking if "old-k8s-version-20210814093902-6746" exists ...
	I0814 09:43:16.616620  199799 ssh_runner.go:149] Run: systemctl --version
	I0814 09:43:16.616669  199799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210814093902-6746
	I0814 09:43:16.662797  199799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/old-k8s-version-20210814093902-6746/id_rsa Username:docker}
	I0814 09:43:16.760381  199799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:43:16.769415  199799 pause.go:50] kubelet running: true
	I0814 09:43:16.769468  199799 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:43:16.874869  199799 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:43:16.874997  199799 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:43:16.941010  199799 cri.go:76] found id: "70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586"
	I0814 09:43:16.941036  199799 cri.go:76] found id: "0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607"
	I0814 09:43:16.941041  199799 cri.go:76] found id: "330312560468f89bccfc3819edc3570f829561ec6f0f09fa8aa01c0a72a5daf0"
	I0814 09:43:16.941045  199799 cri.go:76] found id: "6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61"
	I0814 09:43:16.941048  199799 cri.go:76] found id: "82bbbe4ef766ce7a77beb6a35c1a1d7d974312fb0d790b588286c07ecfe223c1"
	I0814 09:43:16.941052  199799 cri.go:76] found id: "10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab"
	I0814 09:43:16.941056  199799 cri.go:76] found id: "da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3"
	I0814 09:43:16.941059  199799 cri.go:76] found id: "4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74"
	I0814 09:43:16.941062  199799 cri.go:76] found id: "18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88"
	I0814 09:43:16.941068  199799 cri.go:76] found id: "c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010"
	I0814 09:43:16.941072  199799 cri.go:76] found id: "6480fa93026a8af34416607fdd4aaf2f4ac02c7d1ee1ab23bae4536b7e7f2823"
	I0814 09:43:16.941075  199799 cri.go:76] found id: "66a9ce36611591121dee71fec40dd87cd51874561a51962cc10ca295869204e7"
	I0814 09:43:16.941078  199799 cri.go:76] found id: "3bc1ef69b5579728cea73b69d76eae4d026a708c7c438e4ca5de873dca0cb3f1"
	I0814 09:43:16.941082  199799 cri.go:76] found id: "495e84cfb1834bce1069b2626727d804342cc869f3d557981840648e736172d4"
	I0814 09:43:16.941087  199799 cri.go:76] found id: "012cb6d80c0ca231cf5aed243ba1167ad4ad7002149ac53e658d8d6294c43603"
	I0814 09:43:16.941092  199799 cri.go:76] found id: "f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0"
	I0814 09:43:16.941096  199799 cri.go:76] found id: "627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf"
	I0814 09:43:16.941100  199799 cri.go:76] found id: ""
	I0814 09:43:16.941140  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:43:16.978901  199799 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607","pid":2804,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607/rootfs","created":"2021-08-14T09:42:56.608933782Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","pid":789,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","rootfs":"/run/containerd/io.containerd.runtime.v2.
task/k8s.io/0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573/rootfs","created":"2021-08-14T09:42:03.108998536Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210814093902-6746_3a9cb0607c644e32b5d6d0cd9bcdb263"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab","pid":1458,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab/rootfs","created":"2021-08-14T09:42:09.380997742Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernet
es.cri.sandbox-id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88","pid":990,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88/rootfs","created":"2021-08-14T09:42:03.669096761Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","pid":1412,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c",
"rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c/rootfs","created":"2021-08-14T09:42:09.433139534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/default_busybox_c017c9e9-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da","pid":924,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da/rootfs","created":"2021-08-14T09:42:03.485060306Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"241b264faab2c272861991a9cc1e7625
e13902bc70d6c3918403df18a47d88da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210814093902-6746_1d3c6a1f7352c06dc67798687929cb60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6","pid":1652,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6/rootfs","created":"2021-08-14T09:42:09.996978881Z","annotations":{"io.kubernetes.cri.container-name":"busybox","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","pid":916,"status":"running","bundle":"/run/containerd/io.conta
inerd.runtime.v2.task/k8s.io/3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c/rootfs","created":"2021-08-14T09:42:03.449046774Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210814093902-6746_2f8c03c3dd63840ab7d03ee612530660"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","pid":1289,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290/rootfs","created":"2021-08-14T09:42:09.1016778
33Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xnmq2_90e06827-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74","pid":1018,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74/rootfs","created":"2021-08-14T09:42:03.748895655Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f21be6d02bb655931f82a3056a46963
da1f91d35ea2c34ec09ff925f18fd69","pid":1404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69/rootfs","created":"2021-08-14T09:42:09.40238851Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9rbws_90e08770-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf","pid":2327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b
353cb573f07a7a1a09de5d2edf/rootfs","created":"2021-08-14T09:42:25.928995816Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","pid":907,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d/rootfs","created":"2021-08-14T09:42:03.405000912Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210814093902-674
6_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61","pid":1592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61/rootfs","created":"2021-08-14T09:42:10.029016373Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586","pid":2811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586","rootfs":"/run/containerd/io.containerd.runtime.v2.t
ask/k8s.io/70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586/rootfs","created":"2021-08-14T09:42:56.608985757Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","pid":1414,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277/rootfs","created":"2021-08-14T09:42:09.400981Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_cor
edns-fb8b8dccf-nfccv_90c2ae3f-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","pid":2200,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7/rootfs","created":"2021-08-14T09:42:25.537253371Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-wbkxs_ee42e462-fce3-11eb-8319-0242c0a83a02"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","pid":1329,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a773999
086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938/rootfs","created":"2021-08-14T09:42:09.201584245Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_91a7567d-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010","pid":821,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010/rootfs","created":"2021-08-14T09:42:03.34903238Z","annotations":{"io.kubernetes.cri.container-name":
"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3","pid":1058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3/rootfs","created":"2021-08-14T09:42:03.837838826Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","pid":2289,"status":"running","bundle":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39/rootfs","created":"2021-08-14T09:42:25.725047122Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-q7m9p_ee4fc5ca-fce3-11eb-8319-0242c0a83a02"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","pid":2281,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a/rootfs","created":"2021-08-14T09:42:25.712992
64Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-7z4lv_ee4fb864-fce3-11eb-8319-0242c0a83a02"},"owner":"root"}]
	I0814 09:43:16.979136  199799 cri.go:113] list returned 22 containers
	I0814 09:43:16.979149  199799 cri.go:116] container: {ID:0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607 Status:running}
	I0814 09:43:16.979160  199799 cri.go:116] container: {ID:0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573 Status:running}
	I0814 09:43:16.979165  199799 cri.go:118] skipping 0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573 - not in ps
	I0814 09:43:16.979172  199799 cri.go:116] container: {ID:10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab Status:running}
	I0814 09:43:16.979176  199799 cri.go:116] container: {ID:18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88 Status:running}
	I0814 09:43:16.979183  199799 cri.go:116] container: {ID:1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c Status:running}
	I0814 09:43:16.979187  199799 cri.go:118] skipping 1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c - not in ps
	I0814 09:43:16.979190  199799 cri.go:116] container: {ID:241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da Status:running}
	I0814 09:43:16.979199  199799 cri.go:118] skipping 241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da - not in ps
	I0814 09:43:16.979205  199799 cri.go:116] container: {ID:2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6 Status:running}
	I0814 09:43:16.979209  199799 cri.go:118] skipping 2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6 - not in ps
	I0814 09:43:16.979215  199799 cri.go:116] container: {ID:3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c Status:running}
	I0814 09:43:16.979220  199799 cri.go:118] skipping 3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c - not in ps
	I0814 09:43:16.979226  199799 cri.go:116] container: {ID:40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290 Status:running}
	I0814 09:43:16.979230  199799 cri.go:118] skipping 40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290 - not in ps
	I0814 09:43:16.979237  199799 cri.go:116] container: {ID:4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74 Status:running}
	I0814 09:43:16.979242  199799 cri.go:116] container: {ID:57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69 Status:running}
	I0814 09:43:16.979250  199799 cri.go:118] skipping 57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69 - not in ps
	I0814 09:43:16.979254  199799 cri.go:116] container: {ID:627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf Status:running}
	I0814 09:43:16.979257  199799 cri.go:116] container: {ID:6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d Status:running}
	I0814 09:43:16.979262  199799 cri.go:118] skipping 6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d - not in ps
	I0814 09:43:16.979268  199799 cri.go:116] container: {ID:6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61 Status:running}
	I0814 09:43:16.979272  199799 cri.go:116] container: {ID:70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586 Status:running}
	I0814 09:43:16.979278  199799 cri.go:116] container: {ID:98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277 Status:running}
	I0814 09:43:16.979282  199799 cri.go:118] skipping 98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277 - not in ps
	I0814 09:43:16.979288  199799 cri.go:116] container: {ID:9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7 Status:running}
	I0814 09:43:16.979292  199799 cri.go:118] skipping 9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7 - not in ps
	I0814 09:43:16.979302  199799 cri.go:116] container: {ID:a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938 Status:running}
	I0814 09:43:16.979309  199799 cri.go:118] skipping a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938 - not in ps
	I0814 09:43:16.979312  199799 cri.go:116] container: {ID:c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010 Status:running}
	I0814 09:43:16.979316  199799 cri.go:116] container: {ID:da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3 Status:running}
	I0814 09:43:16.979323  199799 cri.go:116] container: {ID:f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39 Status:running}
	I0814 09:43:16.979328  199799 cri.go:118] skipping f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39 - not in ps
	I0814 09:43:16.979333  199799 cri.go:116] container: {ID:f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a Status:running}
	I0814 09:43:16.979337  199799 cri.go:118] skipping f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a - not in ps
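
The "skipping … not in ps" messages above come from cross-checking the runc listing against the earlier `crictl ps` results: sandbox records show up in `runc list` but not in `crictl ps`, so they are dropped, and only containers in the wanted state survive the filter. A rough sketch of that predicate, with hypothetical names rather than the real cri.go implementation:

    package main

    import "fmt"

    // container is the minimal shape the filter needs; illustrative only.
    type container struct {
    	ID     string
    	Status string
    }

    // filterForPause keeps containers that match the wanted state and that
    // also appeared in `crictl ps`; everything else is skipped, mirroring
    // the "not in ps" and `state = "paused", want "running"` lines above.
    func filterForPause(listed []container, inPs map[string]bool, want string) []string {
    	var ids []string
    	for _, c := range listed {
    		if c.Status != want {
    			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
    			continue
    		}
    		if !inPs[c.ID] {
    			fmt.Printf("skipping %s - not in ps\n", c.ID)
    			continue
    		}
    		ids = append(ids, c.ID)
    	}
    	return ids
    }

    func main() {
    	listed := []container{
    		{ID: "0a1774cc", Status: "paused"},
    		{ID: "0d451db4", Status: "running"}, // sandbox: absent from crictl ps
    		{ID: "10ba05c4", Status: "running"},
    	}
    	inPs := map[string]bool{"0a1774cc": true, "10ba05c4": true}
    	fmt.Println("to pause:", filterForPause(listed, inPs, "running"))
    }
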
	I0814 09:43:16.979372  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607
	I0814 09:43:16.993148  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607 10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab
	I0814 09:43:17.005329  199799 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607 10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:43:17Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:43:17.281750  199799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:43:17.291270  199799 pause.go:50] kubelet running: false
	I0814 09:43:17.291320  199799 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:43:17.386230  199799 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:43:17.386304  199799 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:43:17.454231  199799 cri.go:76] found id: "70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586"
	I0814 09:43:17.454260  199799 cri.go:76] found id: "0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607"
	I0814 09:43:17.454264  199799 cri.go:76] found id: "330312560468f89bccfc3819edc3570f829561ec6f0f09fa8aa01c0a72a5daf0"
	I0814 09:43:17.454269  199799 cri.go:76] found id: "6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61"
	I0814 09:43:17.454272  199799 cri.go:76] found id: "82bbbe4ef766ce7a77beb6a35c1a1d7d974312fb0d790b588286c07ecfe223c1"
	I0814 09:43:17.454278  199799 cri.go:76] found id: "10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab"
	I0814 09:43:17.454283  199799 cri.go:76] found id: "da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3"
	I0814 09:43:17.454288  199799 cri.go:76] found id: "4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74"
	I0814 09:43:17.454293  199799 cri.go:76] found id: "18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88"
	I0814 09:43:17.454303  199799 cri.go:76] found id: "c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010"
	I0814 09:43:17.454316  199799 cri.go:76] found id: "6480fa93026a8af34416607fdd4aaf2f4ac02c7d1ee1ab23bae4536b7e7f2823"
	I0814 09:43:17.454321  199799 cri.go:76] found id: "66a9ce36611591121dee71fec40dd87cd51874561a51962cc10ca295869204e7"
	I0814 09:43:17.454332  199799 cri.go:76] found id: "3bc1ef69b5579728cea73b69d76eae4d026a708c7c438e4ca5de873dca0cb3f1"
	I0814 09:43:17.454340  199799 cri.go:76] found id: "495e84cfb1834bce1069b2626727d804342cc869f3d557981840648e736172d4"
	I0814 09:43:17.454345  199799 cri.go:76] found id: "012cb6d80c0ca231cf5aed243ba1167ad4ad7002149ac53e658d8d6294c43603"
	I0814 09:43:17.454353  199799 cri.go:76] found id: "f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0"
	I0814 09:43:17.454360  199799 cri.go:76] found id: "627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf"
	I0814 09:43:17.454370  199799 cri.go:76] found id: ""
	I0814 09:43:17.454416  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:43:17.492296  199799 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607","pid":2804,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607/rootfs","created":"2021-08-14T09:42:56.608933782Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","pid":789,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","rootfs":"/run/containerd/io.containerd.runtime.v2.t
ask/k8s.io/0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573/rootfs","created":"2021-08-14T09:42:03.108998536Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210814093902-6746_3a9cb0607c644e32b5d6d0cd9bcdb263"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab","pid":1458,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab/rootfs","created":"2021-08-14T09:42:09.380997742Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernete
s.cri.sandbox-id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88","pid":990,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88/rootfs","created":"2021-08-14T09:42:03.669096761Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","pid":1412,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","
rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c/rootfs","created":"2021-08-14T09:42:09.433139534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/default_busybox_c017c9e9-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da","pid":924,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da/rootfs","created":"2021-08-14T09:42:03.485060306Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"241b264faab2c272861991a9cc1e7625e
13902bc70d6c3918403df18a47d88da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210814093902-6746_1d3c6a1f7352c06dc67798687929cb60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6","pid":1652,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6/rootfs","created":"2021-08-14T09:42:09.996978881Z","annotations":{"io.kubernetes.cri.container-name":"busybox","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","pid":916,"status":"running","bundle":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c/rootfs","created":"2021-08-14T09:42:03.449046774Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210814093902-6746_2f8c03c3dd63840ab7d03ee612530660"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","pid":1289,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290/rootfs","created":"2021-08-14T09:42:09.10167783
3Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xnmq2_90e06827-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74","pid":1018,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74/rootfs","created":"2021-08-14T09:42:03.748895655Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f21be6d02bb655931f82a3056a46963d
a1f91d35ea2c34ec09ff925f18fd69","pid":1404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69/rootfs","created":"2021-08-14T09:42:09.40238851Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9rbws_90e08770-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf","pid":2327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b3
53cb573f07a7a1a09de5d2edf/rootfs","created":"2021-08-14T09:42:25.928995816Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","pid":907,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d/rootfs","created":"2021-08-14T09:42:03.405000912Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210814093902-6746
_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61","pid":1592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61/rootfs","created":"2021-08-14T09:42:10.029016373Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586","pid":2811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586","rootfs":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586/rootfs","created":"2021-08-14T09:42:56.608985757Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","pid":1414,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277/rootfs","created":"2021-08-14T09:42:09.400981Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_core
dns-fb8b8dccf-nfccv_90c2ae3f-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","pid":2200,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7/rootfs","created":"2021-08-14T09:42:25.537253371Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-wbkxs_ee42e462-fce3-11eb-8319-0242c0a83a02"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","pid":1329,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7739990
86895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938/rootfs","created":"2021-08-14T09:42:09.201584245Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_91a7567d-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010","pid":821,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010/rootfs","created":"2021-08-14T09:42:03.34903238Z","annotations":{"io.kubernetes.cri.container-name":"
kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3","pid":1058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3/rootfs","created":"2021-08-14T09:42:03.837838826Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","pid":2289,"status":"running","bundle":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39/rootfs","created":"2021-08-14T09:42:25.725047122Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-q7m9p_ee4fc5ca-fce3-11eb-8319-0242c0a83a02"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","pid":2281,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a/rootfs","created":"2021-08-14T09:42:25.7129926
4Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-7z4lv_ee4fb864-fce3-11eb-8319-0242c0a83a02"},"owner":"root"}]
	I0814 09:43:17.492523  199799 cri.go:113] list returned 22 containers
	I0814 09:43:17.492538  199799 cri.go:116] container: {ID:0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607 Status:paused}
	I0814 09:43:17.492552  199799 cri.go:122] skipping {0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607 paused}: state = "paused", want "running"
	I0814 09:43:17.492568  199799 cri.go:116] container: {ID:0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573 Status:running}
	I0814 09:43:17.492579  199799 cri.go:118] skipping 0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573 - not in ps
	I0814 09:43:17.492586  199799 cri.go:116] container: {ID:10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab Status:running}
	I0814 09:43:17.492595  199799 cri.go:116] container: {ID:18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88 Status:running}
	I0814 09:43:17.492602  199799 cri.go:116] container: {ID:1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c Status:running}
	I0814 09:43:17.492615  199799 cri.go:118] skipping 1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c - not in ps
	I0814 09:43:17.492623  199799 cri.go:116] container: {ID:241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da Status:running}
	I0814 09:43:17.492631  199799 cri.go:118] skipping 241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da - not in ps
	I0814 09:43:17.492638  199799 cri.go:116] container: {ID:2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6 Status:running}
	I0814 09:43:17.492645  199799 cri.go:118] skipping 2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6 - not in ps
	I0814 09:43:17.492653  199799 cri.go:116] container: {ID:3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c Status:running}
	I0814 09:43:17.492663  199799 cri.go:118] skipping 3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c - not in ps
	I0814 09:43:17.492672  199799 cri.go:116] container: {ID:40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290 Status:running}
	I0814 09:43:17.492679  199799 cri.go:118] skipping 40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290 - not in ps
	I0814 09:43:17.492685  199799 cri.go:116] container: {ID:4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74 Status:running}
	I0814 09:43:17.492696  199799 cri.go:116] container: {ID:57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69 Status:running}
	I0814 09:43:17.492707  199799 cri.go:118] skipping 57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69 - not in ps
	I0814 09:43:17.492715  199799 cri.go:116] container: {ID:627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf Status:running}
	I0814 09:43:17.492723  199799 cri.go:116] container: {ID:6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d Status:running}
	I0814 09:43:17.492733  199799 cri.go:118] skipping 6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d - not in ps
	I0814 09:43:17.492740  199799 cri.go:116] container: {ID:6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61 Status:running}
	I0814 09:43:17.492748  199799 cri.go:116] container: {ID:70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586 Status:running}
	I0814 09:43:17.492756  199799 cri.go:116] container: {ID:98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277 Status:running}
	I0814 09:43:17.492765  199799 cri.go:118] skipping 98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277 - not in ps
	I0814 09:43:17.492773  199799 cri.go:116] container: {ID:9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7 Status:running}
	I0814 09:43:17.492782  199799 cri.go:118] skipping 9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7 - not in ps
	I0814 09:43:17.492790  199799 cri.go:116] container: {ID:a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938 Status:running}
	I0814 09:43:17.492820  199799 cri.go:118] skipping a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938 - not in ps
	I0814 09:43:17.492828  199799 cri.go:116] container: {ID:c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010 Status:running}
	I0814 09:43:17.492838  199799 cri.go:116] container: {ID:da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3 Status:running}
	I0814 09:43:17.492847  199799 cri.go:116] container: {ID:f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39 Status:running}
	I0814 09:43:17.492857  199799 cri.go:118] skipping f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39 - not in ps
	I0814 09:43:17.492865  199799 cri.go:116] container: {ID:f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a Status:running}
	I0814 09:43:17.492876  199799 cri.go:118] skipping f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a - not in ps
	I0814 09:43:17.492920  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab
	I0814 09:43:17.507076  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab 18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88
	I0814 09:43:17.519327  199799 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab 18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:43:17Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:43:18.060072  199799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:43:18.069628  199799 pause.go:50] kubelet running: false
	I0814 09:43:18.069697  199799 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:43:18.164533  199799 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:43:18.164613  199799 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:43:18.230488  199799 cri.go:76] found id: "70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586"
	I0814 09:43:18.230511  199799 cri.go:76] found id: "0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607"
	I0814 09:43:18.230516  199799 cri.go:76] found id: "330312560468f89bccfc3819edc3570f829561ec6f0f09fa8aa01c0a72a5daf0"
	I0814 09:43:18.230519  199799 cri.go:76] found id: "6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61"
	I0814 09:43:18.230523  199799 cri.go:76] found id: "82bbbe4ef766ce7a77beb6a35c1a1d7d974312fb0d790b588286c07ecfe223c1"
	I0814 09:43:18.230528  199799 cri.go:76] found id: "10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab"
	I0814 09:43:18.230531  199799 cri.go:76] found id: "da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3"
	I0814 09:43:18.230534  199799 cri.go:76] found id: "4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74"
	I0814 09:43:18.230537  199799 cri.go:76] found id: "18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88"
	I0814 09:43:18.230544  199799 cri.go:76] found id: "c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010"
	I0814 09:43:18.230547  199799 cri.go:76] found id: "6480fa93026a8af34416607fdd4aaf2f4ac02c7d1ee1ab23bae4536b7e7f2823"
	I0814 09:43:18.230551  199799 cri.go:76] found id: "66a9ce36611591121dee71fec40dd87cd51874561a51962cc10ca295869204e7"
	I0814 09:43:18.230554  199799 cri.go:76] found id: "3bc1ef69b5579728cea73b69d76eae4d026a708c7c438e4ca5de873dca0cb3f1"
	I0814 09:43:18.230558  199799 cri.go:76] found id: "495e84cfb1834bce1069b2626727d804342cc869f3d557981840648e736172d4"
	I0814 09:43:18.230565  199799 cri.go:76] found id: "012cb6d80c0ca231cf5aed243ba1167ad4ad7002149ac53e658d8d6294c43603"
	I0814 09:43:18.230573  199799 cri.go:76] found id: "f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0"
	I0814 09:43:18.230577  199799 cri.go:76] found id: "627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf"
	I0814 09:43:18.230585  199799 cri.go:76] found id: ""
	I0814 09:43:18.230623  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:43:18.268440  199799 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607","pid":2804,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607/rootfs","created":"2021-08-14T09:42:56.608933782Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","pid":789,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","rootfs":"/run/containerd/io.containerd.runtime.v2.t
ask/k8s.io/0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573/rootfs","created":"2021-08-14T09:42:03.108998536Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210814093902-6746_3a9cb0607c644e32b5d6d0cd9bcdb263"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab","pid":1458,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab/rootfs","created":"2021-08-14T09:42:09.380997742Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes
.cri.sandbox-id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88","pid":990,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88/rootfs","created":"2021-08-14T09:42:03.669096761Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","pid":1412,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","r
ootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c/rootfs","created":"2021-08-14T09:42:09.433139534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/default_busybox_c017c9e9-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da","pid":924,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da/rootfs","created":"2021-08-14T09:42:03.485060306Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"241b264faab2c272861991a9cc1e7625e1
3902bc70d6c3918403df18a47d88da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210814093902-6746_1d3c6a1f7352c06dc67798687929cb60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6","pid":1652,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6/rootfs","created":"2021-08-14T09:42:09.996978881Z","annotations":{"io.kubernetes.cri.container-name":"busybox","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","pid":916,"status":"running","bundle":"/run/containerd/io.contain
erd.runtime.v2.task/k8s.io/3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c/rootfs","created":"2021-08-14T09:42:03.449046774Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210814093902-6746_2f8c03c3dd63840ab7d03ee612530660"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","pid":1289,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290/rootfs","created":"2021-08-14T09:42:09.101677833
Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xnmq2_90e06827-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74","pid":1018,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74/rootfs","created":"2021-08-14T09:42:03.748895655Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f21be6d02bb655931f82a3056a46963da
1f91d35ea2c34ec09ff925f18fd69","pid":1404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69/rootfs","created":"2021-08-14T09:42:09.40238851Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9rbws_90e08770-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf","pid":2327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b35
3cb573f07a7a1a09de5d2edf/rootfs","created":"2021-08-14T09:42:25.928995816Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","pid":907,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d/rootfs","created":"2021-08-14T09:42:03.405000912Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210814093902-6746_
ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61","pid":1592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61/rootfs","created":"2021-08-14T09:42:10.029016373Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586","pid":2811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586","rootfs":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586/rootfs","created":"2021-08-14T09:42:56.608985757Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","pid":1414,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277/rootfs","created":"2021-08-14T09:42:09.400981Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_cored
ns-fb8b8dccf-nfccv_90c2ae3f-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","pid":2200,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7/rootfs","created":"2021-08-14T09:42:25.537253371Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-wbkxs_ee42e462-fce3-11eb-8319-0242c0a83a02"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","pid":1329,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a77399908
6895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938/rootfs","created":"2021-08-14T09:42:09.201584245Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_91a7567d-fce3-11eb-977c-0242f298e734"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010","pid":821,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010/rootfs","created":"2021-08-14T09:42:03.34903238Z","annotations":{"io.kubernetes.cri.container-name":"k
ube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3","pid":1058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3/rootfs","created":"2021-08-14T09:42:03.837838826Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","pid":2289,"status":"running","bundle":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39/rootfs","created":"2021-08-14T09:42:25.725047122Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-q7m9p_ee4fc5ca-fce3-11eb-8319-0242c0a83a02"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","pid":2281,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a/rootfs","created":"2021-08-14T09:42:25.71299264
Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-7z4lv_ee4fb864-fce3-11eb-8319-0242c0a83a02"},"owner":"root"}]
	I0814 09:43:18.268690  199799 cri.go:113] list returned 22 containers
	I0814 09:43:18.268708  199799 cri.go:116] container: {ID:0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607 Status:paused}
	I0814 09:43:18.268723  199799 cri.go:122] skipping {0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607 paused}: state = "paused", want "running"
	I0814 09:43:18.268739  199799 cri.go:116] container: {ID:0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573 Status:running}
	I0814 09:43:18.268749  199799 cri.go:118] skipping 0d451db486447d38fde0a8e040baadaf60c9eab21d43ec1ff3b1814992fa9573 - not in ps
	I0814 09:43:18.268759  199799 cri.go:116] container: {ID:10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab Status:paused}
	I0814 09:43:18.268772  199799 cri.go:122] skipping {10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab paused}: state = "paused", want "running"
	I0814 09:43:18.268782  199799 cri.go:116] container: {ID:18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88 Status:running}
	I0814 09:43:18.268812  199799 cri.go:116] container: {ID:1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c Status:running}
	I0814 09:43:18.268822  199799 cri.go:118] skipping 1a4b21d4ab6458610792e500b61cfb1529cb6409684069514d25b4694d8b6d6c - not in ps
	I0814 09:43:18.268830  199799 cri.go:116] container: {ID:241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da Status:running}
	I0814 09:43:18.268837  199799 cri.go:118] skipping 241b264faab2c272861991a9cc1e7625e13902bc70d6c3918403df18a47d88da - not in ps
	I0814 09:43:18.268845  199799 cri.go:116] container: {ID:2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6 Status:running}
	I0814 09:43:18.268855  199799 cri.go:118] skipping 2b23c799a10636b77301a7b57469b595b048956f8402a8f8a2a229cf92d6e2e6 - not in ps
	I0814 09:43:18.268862  199799 cri.go:116] container: {ID:3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c Status:running}
	I0814 09:43:18.268871  199799 cri.go:118] skipping 3af71ccfabc80f758ad3451b442384ae9f0ab9c7a813ceba5c20a56f574c8e6c - not in ps
	I0814 09:43:18.268878  199799 cri.go:116] container: {ID:40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290 Status:running}
	I0814 09:43:18.268889  199799 cri.go:118] skipping 40fe49db557804f2752f0c7ff12e794ad201efdb4fbd4cf17318e1ec1d64b290 - not in ps
	I0814 09:43:18.268898  199799 cri.go:116] container: {ID:4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74 Status:running}
	I0814 09:43:18.268905  199799 cri.go:116] container: {ID:57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69 Status:running}
	I0814 09:43:18.268915  199799 cri.go:118] skipping 57f21be6d02bb655931f82a3056a46963da1f91d35ea2c34ec09ff925f18fd69 - not in ps
	I0814 09:43:18.268923  199799 cri.go:116] container: {ID:627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf Status:running}
	I0814 09:43:18.268931  199799 cri.go:116] container: {ID:6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d Status:running}
	I0814 09:43:18.268940  199799 cri.go:118] skipping 6f3c7a964584ec1362aad1227f07405ef6db3d10f7f2e4577ff68db71f7f156d - not in ps
	I0814 09:43:18.268949  199799 cri.go:116] container: {ID:6f7165ba4ba5228004f5f8068bb79a85bcbf2d66e95907c590aadba5c82bcf61 Status:running}
	I0814 09:43:18.268957  199799 cri.go:116] container: {ID:70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586 Status:running}
	I0814 09:43:18.268965  199799 cri.go:116] container: {ID:98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277 Status:running}
	I0814 09:43:18.268974  199799 cri.go:118] skipping 98da8496f70fa6cf41b87635e2a3b205a1fffcbaad4c56b18241c8bf749c8277 - not in ps
	I0814 09:43:18.268982  199799 cri.go:116] container: {ID:9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7 Status:running}
	I0814 09:43:18.268989  199799 cri.go:118] skipping 9e0c50b7bca6eea5c92a0a2f4ca01d13677400bbfad1ebd7721bc078d52152e7 - not in ps
	I0814 09:43:18.268997  199799 cri.go:116] container: {ID:a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938 Status:running}
	I0814 09:43:18.269005  199799 cri.go:118] skipping a773999086895c17eed6a0200ec417ca44fcf40f3af617eeef2108afafb9a938 - not in ps
	I0814 09:43:18.269013  199799 cri.go:116] container: {ID:c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010 Status:running}
	I0814 09:43:18.269020  199799 cri.go:116] container: {ID:da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3 Status:running}
	I0814 09:43:18.269029  199799 cri.go:116] container: {ID:f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39 Status:running}
	I0814 09:43:18.269038  199799 cri.go:118] skipping f018a356df108b145ce312587c36cdd5edf7f79b92a5350c0ed6a14c53c73c39 - not in ps
	I0814 09:43:18.269045  199799 cri.go:116] container: {ID:f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a Status:running}
	I0814 09:43:18.269057  199799 cri.go:118] skipping f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a - not in ps
	I0814 09:43:18.269105  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88
	I0814 09:43:18.283019  199799 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88 4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74
	I0814 09:43:18.298245  199799 out.go:177] 
	W0814 09:43:18.298395  199799 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88 4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:43:18Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0814 09:43:18.298416  199799 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0814 09:43:18.301238  199799 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0814 09:43:18.302419  199799 out.go:177] 

                                                
                                                
** /stderr **
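The root cause is visible in the two ssh_runner lines in the stderr above. The cri.go filter walks the runc list JSON, skips paused containers and sandbox IDs not in ps, and is left with 18664f66... and 4a5588c5... to pause. minikube pauses the first ID on its own, then re-runs "runc pause" with both IDs in a single invocation; runc pause accepts exactly one container ID per call, so the batched command exits with status 1 and minikube surfaces it as GUEST_PAUSE (the pause command fails with exit status 80 below). The following is a minimal illustrative sketch of the obvious workaround, one runc invocation per ID; it runs runc locally and merely stands in for minikube's ssh_runner plumbing:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseContainers issues one `runc pause` per container ID, because runc
	// rejects more than one ID in a single pause invocation (the error above).
	func pauseContainers(ids []string) error {
		for _, id := range ids {
			cmd := exec.Command("sudo", "runc",
				"--root", "/run/containerd/runc/k8s.io", "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		// The two IDs minikube tried to pause in a single call in the log above.
		ids := []string{
			"18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88",
			"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74",
		}
		if err := pauseContainers(ids); err != nil {
			fmt.Println(err)
		}
	}
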
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210814093902-6746 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210814093902-6746
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210814093902-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d",
	        "Created": "2021-08-14T09:39:04.181006779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:41:39.607314274Z",
	            "FinishedAt": "2021-08-14T09:41:37.768809378Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/hostname",
	        "HostsPath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/hosts",
	        "LogPath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d-json.log",
	        "Name": "/old-k8s-version-20210814093902-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210814093902-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210814093902-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210814093902-6746",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210814093902-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210814093902-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210814093902-6746",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210814093902-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0429f21e8f7b461ee1c8edfd3572a341b6c3ea8376909fc0ce2d8f26a6a3d50c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32932"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32929"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32931"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32930"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0429f21e8f7b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210814093902-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "58773ee473db"
	                    ],
	                    "NetworkID": "978faa9d778a689fe2bae1f7ad4f3c1e866ffce513602e1412ddcf32d9deb6cb",
	                    "EndpointID": "61d62faeece6c26082735ebd2b0af7e05b0601241dcaa15c49a8ec422abe2773",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
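The inspect output confirms the node container itself is still running, with its SSH port published on 127.0.0.1:32933. For post-mortems like this one, the same Ports structure can be read programmatically; the sketch below (assuming docker is on PATH and the profile container still exists) decodes only the fields of the 22/tcp binding shown in the dump above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models just the fields of `docker inspect` output used here;
	// the HostIp/HostPort shape matches the NetworkSettings.Ports map above.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"old-k8s-version-20210814093902-6746").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			panic("unexpected inspect output")
		}
		// For this run the dump above shows 127.0.0.1:32933.
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh port -> %s:%s\n", b.HostIp, b.HostPort)
		}
	}
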
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746: exit status 2 (303.104944ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210814093902-6746 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:36:36 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | force-systemd-flag-20210814093636-6746            |                                        |         |         |                               |                               |
	|         | --memory=2048 --force-systemd                     |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker            |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                        |         |         |                               |                               |
	| -p      | force-systemd-flag-20210814093636-6746            | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                        |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:28 UTC |
	|         | force-systemd-flag-20210814093636-6746            |                                        |         |         |                               |                               |
	| start   | -p                                                | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:28 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | force-systemd-env-20210814093728-6746             |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=5 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210814093728-6746             | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                        |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:15 UTC |
	|         | force-systemd-env-20210814093728-6746             |                                        |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:15 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | cert-options-20210814093815-6746                  |                                        |         |         |                               |                               |
	|         | --memory=2048                                     |                                        |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                        |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                        |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                        |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                        |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	| -p      | cert-options-20210814093815-6746                  | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                        |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                        |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:39:02 UTC |
	|         | cert-options-20210814093815-6746                  |                                        |         |         |                               |                               |
	| unpause | -p pause-20210814093545-6746                      | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:48 UTC | Sat, 14 Aug 2021 09:40:48 UTC |
	|         | --alsologtostderr -v=5                            |                                        |         |         |                               |                               |
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:54 UTC | Sat, 14 Aug 2021 09:41:01 UTC |
	|         | -n 25                                             |                                        |         |         |                               |                               |
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:02 UTC | Sat, 14 Aug 2021 09:41:03 UTC |
	|         | -n 25                                             |                                        |         |         |                               |                               |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:03 UTC | Sat, 14 Aug 2021 09:41:06 UTC |
	|         | --alsologtostderr -v=5                            |                                        |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:39:02 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                        |         |         |                               |                               |
	|         | --keep-context=false                              |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	| profile | list --output json                                | minikube                               | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:06 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:07 UTC | Sat, 14 Aug 2021 09:41:08 UTC |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:16 UTC | Sat, 14 Aug 2021 09:41:17 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:17 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                        |         |         |                               |                               |
	|         | --keep-context=false                              |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                        |         |         |                               |                               |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:43:10
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:43:10.295339  198227 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:43:10.295435  198227 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:10.295439  198227 out.go:311] Setting ErrFile to fd 2...
	I0814 09:43:10.295442  198227 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:10.295542  198227 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:43:10.295778  198227 out.go:305] Setting JSON to false
	I0814 09:43:10.332745  198227 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5153,"bootTime":1628929038,"procs":263,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:43:10.332881  198227 start.go:121] virtualization: kvm guest
	I0814 09:43:10.335219  198227 out.go:177] * [no-preload-20210814094108-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:43:10.336630  198227 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:43:10.335357  198227 notify.go:169] Checking for updates...
	I0814 09:43:10.338003  198227 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:43:10.339370  198227 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:43:10.340650  198227 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:43:10.341069  198227 config.go:177] Loaded profile config "no-preload-20210814094108-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:43:10.341459  198227 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:43:10.388220  198227 docker.go:132] docker version: linux-19.03.15
	I0814 09:43:10.388296  198227 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:43:10.466688  198227 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:43:10.423254246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:43:10.466804  198227 docker.go:244] overlay module found
	I0814 09:43:10.468774  198227 out.go:177] * Using the docker driver based on existing profile
	I0814 09:43:10.468807  198227 start.go:278] selected driver: docker
	I0814 09:43:10.468814  198227 start.go:751] validating driver "docker" against &{Name:no-preload-20210814094108-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210814094108-6746 Namespace:default APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTime
out:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:43:10.468930  198227 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:43:10.468971  198227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:43:10.468987  198227 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:43:10.470385  198227 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:43:10.471217  198227 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:43:10.547707  198227 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:43:10.505716548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0814 09:43:10.547830  198227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:43:10.547863  198227 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:43:10.549708  198227 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
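The docker info line above already carries the relevant capability bits (MemoryLimit:true, SwapLimit:false). A quick sketch of how to confirm them by hand on a cgroup-v1 host like this Debian 9 agent:

	# which cgroup controllers the kernel exposes
	grep -E '^(memory|cpu)\b' /proc/cgroups
	# what the docker daemon itself reports
	docker info --format 'mem={{.MemoryLimit}} swap={{.SwapLimit}}'

SwapLimit=false here usually just means the kernel was booted without swapaccount=1, which is what the linked page covers.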
	I0814 09:43:10.549804  198227 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:43:10.549830  198227 cni.go:93] Creating CNI manager for ""
	I0814 09:43:10.549837  198227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:43:10.549850  198227 start_flags.go:277] config:
	{Name:no-preload-20210814094108-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210814094108-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:43:10.551692  198227 out.go:177] * Starting control plane node no-preload-20210814094108-6746 in cluster no-preload-20210814094108-6746
	I0814 09:43:10.551725  198227 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:43:10.552935  198227 out.go:177] * Pulling base image ...
	I0814 09:43:10.552969  198227 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:43:10.553055  198227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:43:10.553086  198227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/config.json ...
	I0814 09:43:10.553290  198227 cache.go:108] acquiring lock: {Name:mk45577cc3748bb07affaae091a26e8410047cac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553291  198227 cache.go:108] acquiring lock: {Name:mk4723e7eabe6689e250edc786d48af6de99ffbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553291  198227 cache.go:108] acquiring lock: {Name:mk4b87712df5985ae10899cad089779def4ce8b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553384  198227 cache.go:108] acquiring lock: {Name:mk7018cedaea6dcae7ca085fae097c7ae1351038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553384  198227 cache.go:108] acquiring lock: {Name:mk5d8f79fe96efa08c9b364c312d80509c3c09c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553415  198227 cache.go:108] acquiring lock: {Name:mk89f9a4f4f0278092d93ef1c75e49ac69a8b3d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553425  198227 cache.go:108] acquiring lock: {Name:mkf39f81098acf22603e4dcac428e043084b67f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553462  198227 cache.go:108] acquiring lock: {Name:mkb632bb0773648f7fd3acb464d108826a0e8e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553487  198227 cache.go:108] acquiring lock: {Name:mk2b036cb72c961bb5a0d34fa35818f29318d0a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553515  198227 cache.go:108] acquiring lock: {Name:mk6842265b6ce64ef0fb74765284422364c16edd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553537  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0814 09:43:10.553550  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0814 09:43:10.553565  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0814 09:43:10.553563  198227 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 189.925µs
	I0814 09:43:10.553569  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0814 09:43:10.553582  198227 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553544  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0814 09:43:10.553586  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0814 09:43:10.553589  198227 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 308.56µs
	I0814 09:43:10.553588  198227 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 245.165µs
	I0814 09:43:10.553591  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0814 09:43:10.553522  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0814 09:43:10.553599  198227 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 320.855µs
	I0814 09:43:10.553616  198227 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0814 09:43:10.553610  198227 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553608  198227 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 127.046µs
	I0814 09:43:10.553610  198227 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 151.324µs
	I0814 09:43:10.553628  198227 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0814 09:43:10.553625  198227 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 345.434µs
	I0814 09:43:10.553621  198227 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 109.417µs
	I0814 09:43:10.553638  198227 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553616  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0814 09:43:10.553641  198227 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0814 09:43:10.553647  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0814 09:43:10.553658  198227 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 258.034µs
	I0814 09:43:10.553663  198227 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 311.466µs
	I0814 09:43:10.553682  198227 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0814 09:43:10.553633  198227 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0814 09:43:10.553602  198227 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0814 09:43:10.553670  198227 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553700  198227 cache.go:88] Successfully saved all images to host disk.
	I0814 09:43:10.627140  198227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:43:10.627168  198227 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:43:10.627185  198227 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:43:10.627216  198227 start.go:313] acquiring machines lock for no-preload-20210814094108-6746: {Name:mkedefaa2332f31f505548533b13d397c9430bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.627289  198227 start.go:317] acquired machines lock for "no-preload-20210814094108-6746" in 56.323µs
	I0814 09:43:10.627306  198227 start.go:93] Skipping create...Using existing machine configuration
	I0814 09:43:10.627310  198227 fix.go:55] fixHost starting: 
	I0814 09:43:10.627544  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:43:10.664929  198227 fix.go:108] recreateIfNeeded on no-preload-20210814094108-6746: state=Stopped err=<nil>
	W0814 09:43:10.664977  198227 fix.go:134] unexpected machine state, will restart: <nil>
	I0814 09:43:10.667151  198227 out.go:177] * Restarting existing docker container for "no-preload-20210814094108-6746" ...
	I0814 09:43:10.667225  198227 cli_runner.go:115] Run: docker start no-preload-20210814094108-6746
	I0814 09:43:11.910546  198227 cli_runner.go:168] Completed: docker start no-preload-20210814094108-6746: (1.243289941s)
	I0814 09:43:11.910624  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:43:11.954979  198227 kic.go:420] container "no-preload-20210814094108-6746" state is running.
	I0814 09:43:11.955476  198227 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210814094108-6746
	I0814 09:43:12.006981  198227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/config.json ...
	I0814 09:43:12.007177  198227 machine.go:88] provisioning docker machine ...
	I0814 09:43:12.007201  198227 ubuntu.go:169] provisioning hostname "no-preload-20210814094108-6746"
	I0814 09:43:12.007262  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:12.048149  198227 main.go:130] libmachine: Using SSH client type: native
	I0814 09:43:12.048384  198227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0814 09:43:12.048401  198227 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210814094108-6746 && echo "no-preload-20210814094108-6746" | sudo tee /etc/hostname
	I0814 09:43:12.048928  198227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36356->127.0.0.1:32938: read: connection reset by peer
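The dial target in this handshake error (127.0.0.1:32938) is the container's published SSH port. Outside the test harness the same lookup is just the docker template used by the cli_runner above, e.g. for this profile:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-20210814094108-6746

An ssh to 127.0.0.1 on that port with the machine key (kept under ~/.minikube/machines/<profile>/ in minikube's layout) then reaches the node directly.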
	I0814 09:43:15.191850  198227 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210814094108-6746
	
	I0814 09:43:15.191919  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:15.233326  198227 main.go:130] libmachine: Using SSH client type: native
	I0814 09:43:15.233477  198227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0814 09:43:15.233496  198227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210814094108-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210814094108-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210814094108-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	f62389f4ed462       523cad1a4df73       2 seconds ago        Exited              dashboard-metrics-scraper   3                   f693d0e825b58
	70a2684b60ecd       6e38f40d628db       22 seconds ago       Running             storage-provisioner         2                   a773999086895
	0a1774cc304d5       eb516548c180f       22 seconds ago       Running             coredns                     2                   98da8496f70fa
	627ec2ed6d607       9a07b5b4bfac0       53 seconds ago       Running             kubernetes-dashboard        0                   f018a356df108
	2b23c799a1063       56cc512116c8f       About a minute ago   Running             busybox                     1                   1a4b21d4ab645
	330312560468f       eb516548c180f       About a minute ago   Exited              coredns                     1                   98da8496f70fa
	6f7165ba4ba52       6de166512aa22       About a minute ago   Running             kindnet-cni                 1                   57f21be6d02bb
	82bbbe4ef766c       6e38f40d628db       About a minute ago   Exited              storage-provisioner         1                   a773999086895
	10ba05c467b3a       5cd54e388abaf       About a minute ago   Running             kube-proxy                  1                   40fe49db55780
	da2819ec20ab5       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              1                   241b264faab2c
	4a5588c5ee462       2c4adeb21b4ff       About a minute ago   Running             etcd                        1                   3af71ccfabc80
	18664f66f1c4c       00638a24688b0       About a minute ago   Running             kube-scheduler              1                   6f3c7a964584e
	c68b5e7c89b1d       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   0d451db486447
	c81024767969c       56cc512116c8f       2 minutes ago        Exited              busybox                     0                   cef340ae68014
	6480fa93026a8       6de166512aa22       3 minutes ago        Exited              kindnet-cni                 0                   5962da6023694
	66a9ce3661159       5cd54e388abaf       3 minutes ago        Exited              kube-proxy                  0                   1658fa912407e
	3bc1ef69b5579       2c4adeb21b4ff       3 minutes ago        Exited              etcd                        0                   bb6248b4cbda4
	495e84cfb1834       00638a24688b0       3 minutes ago        Exited              kube-scheduler              0                   2ae5c9fdc7988
	012cb6d80c0ca       ecf910f40d6e0       3 minutes ago        Exited              kube-apiserver              0                   37ea6089989ff
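This table is the CRI runtime's view of the node; with containerd it can be reproduced from inside the node, assuming crictl is present in the node image (it ships in kicbase for containerd-runtime clusters):

	minikube ssh -p old-k8s-version-20210814093902-6746 "sudo crictl ps -a"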
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:41:39 UTC, end at Sat 2021-08-14 09:43:19 UTC. --
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.443338522Z" level=info msg="TearDown network for sandbox \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\" successfully"
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.443353853Z" level=info msg="StopPodSandbox for \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\" returns successfully"
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.443552832Z" level=info msg="RemovePodSandbox for \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\""
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.447221763Z" level=info msg="RemovePodSandbox \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\" returns successfully"
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.825096510Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.902190352Z" level=info msg="Finish piping \"stderr\" of container exec \"b6b766c74b2ab40b7e46a655648f95f52532f0044b827ce97947549ce61d8ebc\""
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.902194357Z" level=info msg="Finish piping \"stdout\" of container exec \"b6b766c74b2ab40b7e46a655648f95f52532f0044b827ce97947549ce61d8ebc\""
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.902275127Z" level=info msg="Exec process \"b6b766c74b2ab40b7e46a655648f95f52532f0044b827ce97947549ce61d8ebc\" exits with exit code 0 and error <nil>"
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.903417602Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" returns with exit code 0"
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.825156747Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.907383716Z" level=info msg="Finish piping \"stderr\" of container exec \"ecf7606cda012a59b6438d32d5f8f4a3bccebc838576e7450b8b4698fb9ac98a\""
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.907383691Z" level=info msg="Finish piping \"stdout\" of container exec \"ecf7606cda012a59b6438d32d5f8f4a3bccebc838576e7450b8b4698fb9ac98a\""
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.907575915Z" level=info msg="Exec process \"ecf7606cda012a59b6438d32d5f8f4a3bccebc838576e7450b8b4698fb9ac98a\" exits with exit code 0 and error <nil>"
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.908736770Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" returns with exit code 0"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.417038754Z" level=info msg="CreateContainer within sandbox \"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,}"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.441904009Z" level=info msg="CreateContainer within sandbox \"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.442263004Z" level=info msg="StartContainer for \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.592420647Z" level=info msg="StartContainer for \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\" returns successfully"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.621211117Z" level=info msg="Finish piping stdout of container \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.621268095Z" level=info msg="Finish piping stderr of container \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.621992529Z" level=info msg="TaskExit event &TaskExit{ContainerID:f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0,ID:f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0,Pid:3076,ExitStatus:1,ExitedAt:2021-08-14 09:43:16.621740512 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.677535962Z" level=info msg="shim disconnected" id=f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.677627706Z" level=error msg="copy shim log" error="read /proc/self/fd/152: file already closed"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.701953368Z" level=info msg="RemoveContainer for \"4fde8df1c8495a1c75ba07dbc26af23d1ac80b19740c938c7f22d070b735feed\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.708085183Z" level=info msg="RemoveContainer for \"4fde8df1c8495a1c75ba07dbc26af23d1ac80b19740c938c7f22d070b735feed\" returns successfully"
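The ExecSync entries above are the etcd liveness probe firing every ten seconds; the same command can be replayed by hand against the running etcd container (id 4a5588c5ee462 in the status table) from a shell on the node:

	sudo crictl exec 4a5588c5ee462 /bin/sh -ec 'ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo'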
	
	* 
	* ==> coredns [0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607] <==
	* .:53
	2021-08-14T09:42:56.743Z [INFO] CoreDNS-1.3.1
	2021-08-14T09:42:56.743Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-14T09:42:56.743Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	
	* 
	* ==> coredns [330312560468f89bccfc3819edc3570f829561ec6f0f09fa8aa01c0a72a5daf0] <==
	* .:53
	2021-08-14T09:42:15.002Z [INFO] CoreDNS-1.3.1
	2021-08-14T09:42:15.002Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-14T09:42:15.002Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	E0814 09:42:40.003024       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0814 09:42:40.003024       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-nfccv.unknownuser.log.ERROR.20210814-094240.1: no such file or directory
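The i/o timeouts against 10.96.0.1:443 point at the kubernetes Service VIP rather than at DNS itself. Reachability of that VIP is easy to probe from the node, assuming curl ships in the node image:

	minikube ssh -p old-k8s-version-20210814093902-6746 "curl -sk https://10.96.0.1/version"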
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210814093902-6746
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210814093902-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=old-k8s-version-20210814093902-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_39_34_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:39:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20210814093902-6746
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	System Info:
	 Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	 System UUID:                932371fb-7e85-41f3-aeda-17115a947456
	 Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	 Kernel Version:             4.9.0-16-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.9
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (12 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                coredns-fb8b8dccf-nfccv                                        100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m31s
	  kube-system                etcd-old-k8s-version-20210814093902-6746                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                kindnet-9rbws                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m31s
	  kube-system                kube-apiserver-old-k8s-version-20210814093902-6746             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                kube-controller-manager-old-k8s-version-20210814093902-6746    200m (2%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                kube-proxy-xnmq2                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                kube-scheduler-old-k8s-version-20210814093902-6746             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                metrics-server-8546d8b77b-wbkxs                                100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         54s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-7z4lv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-q7m9p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  Starting                 3m55s                  kubelet, old-k8s-version-20210814093902-6746     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m55s (x8 over 3m55s)  kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x8 over 3m55s)  kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x7 over 3m55s)  kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet, old-k8s-version-20210814093902-6746     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m30s                  kube-proxy, old-k8s-version-20210814093902-6746  Starting kube-proxy.
	  Normal  Starting                 77s                    kubelet, old-k8s-version-20210814093902-6746     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)      kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x7 over 77s)      kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x8 over 77s)      kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                    kubelet, old-k8s-version-20210814093902-6746     Updated Node Allocatable limit across pods
	  Normal  Starting                 69s                    kube-proxy, old-k8s-version-20210814093902-6746  Starting kube-proxy.
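Everything in this section is ordinary kubectl describe output and can be regenerated against the same cluster (minikube names the kubeconfig context after the profile):

	kubectl --context old-k8s-version-20210814093902-6746 describe node old-k8s-version-20210814093902-6746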
	
	* 
	* ==> dmesg <==
	* [  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000001] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000000] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.004033] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000002] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +8.084358] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth95211dff
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce ce 48 d0 08 39 08 06        ........H..9..
	[  +0.103008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000003] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.000001] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000001] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.020470] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth1aaa4059
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 6c 08 49 4a d1 08 06        ......2l.IJ...
	[  +0.000256] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth7067e1ac
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e 8d d5 44 9e 30 08 06        .........D.0..
	[ +11.959520] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfa4c84cf
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 55 0e 07 67 26 08 06        .......U..g&..
	[  +3.495552] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethc578d1e4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 06 df f9 78 a3 27 08 06        .........x.'..
	[  +8.611512] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth2cdba4ed
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 1c e7 62 8b 4d 08 06        .........b.M..
	[Aug14 09:43] cgroup: cgroup2: unknown option "nsdelegate"
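The martian-source lines are routing noise logged because reverse-path filtering and martian logging are enabled on the bridge and veth devices; the relevant switches are plain sysctls:

	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians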
	
	* 
	* ==> etcd [3bc1ef69b5579728cea73b69d76eae4d026a708c7c438e4ca5de873dca0cb3f1] <==
	* 2021-08-14 09:39:25.123849 I | embed: listening for metrics on http://192.168.58.2:2381
	2021-08-14 09:39:25.123922 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-14 09:39:26.015729 I | raft: b2c6679ac05f2cf1 is starting a new election at term 1
	2021-08-14 09:39:26.015767 I | raft: b2c6679ac05f2cf1 became candidate at term 2
	2021-08-14 09:39:26.015784 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	2021-08-14 09:39:26.015799 I | raft: b2c6679ac05f2cf1 became leader at term 2
	2021-08-14 09:39:26.015804 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-14 09:39:26.015988 I | etcdserver: setting up the initial cluster version to 3.3
	2021-08-14 09:39:26.016773 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-14 09:39:26.016876 I | etcdserver: published {Name:old-k8s-version-20210814093902-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:39:26.017194 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-14 09:39:26.017364 I | embed: ready to serve client requests
	2021-08-14 09:39:26.017851 I | embed: ready to serve client requests
	2021-08-14 09:39:26.020056 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:39:26.020441 I | embed: serving client requests on 192.168.58.2:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-14 09:39:38.868421 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (2.062486615s) to execute
	2021-08-14 09:39:38.868548 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20210814093902-6746.169b22cf5d25c938\" " with result "range_response_count:1 size:533" took too long (2.441706977s) to execute
	2021-08-14 09:39:41.922040 W | wal: sync duration of 2.4843694s, expected less than 1s
	2021-08-14 09:39:42.917217 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (696.154229ms) to execute
	2021-08-14 09:39:42.917239 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (3.110932255s) to execute
	2021-08-14 09:39:42.917461 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (761.948723ms) to execute
	2021-08-14 09:39:42.917548 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20210814093902-6746.169b22cf5d25c938\" " with result "range_response_count:1 size:533" took too long (4.045035799s) to execute
	2021-08-14 09:39:42.917689 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210814093902-6746\" " with result "range_response_count:1 size:307" took too long (3.909537619s) to execute
	
	* 
	* ==> etcd [4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74] <==
	* 2021-08-14 09:42:03.811743 I | etcdserver: advertise client URLs = https://192.168.58.2:2379
	2021-08-14 09:42:03.816058 I | etcdserver: restarting member b2c6679ac05f2cf1 in cluster 3a56e4ca95e2355c at commit index 545
	2021-08-14 09:42:03.816122 I | raft: b2c6679ac05f2cf1 became follower at term 2
	2021-08-14 09:42:03.816136 I | raft: newRaft b2c6679ac05f2cf1 [peers: [], term: 2, commit: 545, applied: 0, lastindex: 545, lastterm: 2]
	2021-08-14 09:42:03.823989 W | auth: simple token is not cryptographically signed
	2021-08-14 09:42:03.826248 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-14 09:42:03.828710 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-14 09:42:03.828847 I | embed: listening for metrics on http://192.168.58.2:2381
	2021-08-14 09:42:03.829041 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-14 09:42:03.829441 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-14 09:42:03.829568 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-14 09:42:03.829598 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-14 09:42:05.216475 I | raft: b2c6679ac05f2cf1 is starting a new election at term 2
	2021-08-14 09:42:05.216530 I | raft: b2c6679ac05f2cf1 became candidate at term 3
	2021-08-14 09:42:05.216566 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3
	2021-08-14 09:42:05.216584 I | raft: b2c6679ac05f2cf1 became leader at term 3
	2021-08-14 09:42:05.216600 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3
	2021-08-14 09:42:05.216942 I | etcdserver: published {Name:old-k8s-version-20210814093902-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:42:05.217042 I | embed: ready to serve client requests
	2021-08-14 09:42:05.217319 I | embed: ready to serve client requests
	2021-08-14 09:42:05.219093 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:42:05.219231 I | embed: serving client requests on 192.168.58.2:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-14 09:42:59.517996 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-nfccv\" " with result "range_response_count:1 size:1990" took too long (470.987491ms) to execute
	
	* 
	* ==> kernel <==
	*  09:43:19 up  1:26,  0 users,  load average: 3.64, 2.72, 2.02
	Linux old-k8s-version-20210814093902-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [012cb6d80c0ca231cf5aed243ba1167ad4ad7002149ac53e658d8d6294c43603] <==
	* I0814 09:41:04.855672       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:05.855791       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:05.855964       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:06.856146       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:06.856295       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:07.856449       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:07.856588       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:08.856693       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:08.856885       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:09.857031       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:09.857185       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:10.857320       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:10.857470       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:11.857630       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:11.857759       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:12.857917       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:12.858028       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:13.858201       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:13.858367       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:14.858534       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:14.858666       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:15.858813       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:15.858940       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:16.859112       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:16.859274       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-apiserver [da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3] <==
	* I0814 09:43:09.744528       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:09.744694       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:10.728207       1 controller.go:102] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0814 09:43:10.728292       1 handler_proxy.go:89] no RequestInfo found in the context
	E0814 09:43:10.728369       1 controller.go:108] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0814 09:43:10.728380       1 controller.go:121] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 09:43:10.744779       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:10.744944       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:11.745096       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:11.745210       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:12.745345       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:12.745456       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:13.745616       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:13.745748       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:14.745937       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:14.746082       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:15.746229       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:15.746359       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:16.746530       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:16.746613       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:17.746779       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:17.746906       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:18.747079       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:18.747216       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010] <==
	* I0814 09:42:25.004929       1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
	I0814 09:42:25.009775       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"c5d9fc4c-fce3-11eb-977c-0242f298e734", APIVersion:"apps/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-wbkxs
	I0814 09:42:25.083106       1 controller_utils.go:1034] Caches are synced for deployment controller
	I0814 09:42:25.087411       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"e8aafb81-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-5b494cc544 to 1
	I0814 09:42:25.087699       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"e8abe90a-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	I0814 09:42:25.089617       1 controller_utils.go:1034] Caches are synced for disruption controller
	I0814 09:42:25.089781       1 disruption.go:294] Sending events to api server.
	I0814 09:42:25.093541       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"ee4ec96c-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-7z4lv
	I0814 09:42:25.093974       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"ee4ec420-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-q7m9p
	I0814 09:42:25.119029       1 controller_utils.go:1034] Caches are synced for ReplicationController controller
	I0814 09:42:25.187352       1 controller_utils.go:1034] Caches are synced for expand controller
	I0814 09:42:25.191437       1 controller_utils.go:1034] Caches are synced for PV protection controller
	I0814 09:42:25.213162       1 controller_utils.go:1034] Caches are synced for attach detach controller
	I0814 09:42:25.237395       1 controller_utils.go:1034] Caches are synced for persistent volume controller
	I0814 09:42:25.299360       1 controller_utils.go:1034] Caches are synced for HPA controller
	I0814 09:42:25.544267       1 controller_utils.go:1034] Caches are synced for resource quota controller
	I0814 09:42:25.579594       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	I0814 09:42:25.579614       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0814 09:42:26.202521       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0814 09:42:26.202723       1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
	W0814 09:42:27.302021       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0814 09:42:27.302313       1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
	I0814 09:42:27.402535       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	E0814 09:42:56.454784       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:42:59.404021       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab] <==
	* W0814 09:42:09.644431       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0814 09:42:09.717284       1 server_others.go:148] Using iptables Proxier.
	I0814 09:42:09.717450       1 server_others.go:178] Tearing down inactive rules.
	I0814 09:42:10.337127       1 server.go:555] Version: v1.14.0
	I0814 09:42:10.341304       1 config.go:202] Starting service config controller
	I0814 09:42:10.341320       1 config.go:102] Starting endpoints config controller
	I0814 09:42:10.341336       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0814 09:42:10.341337       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0814 09:42:10.441492       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0814 09:42:10.441493       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-proxy [66a9ce36611591121dee71fec40dd87cd51874561a51962cc10ca295869204e7] <==
	* W0814 09:39:49.324608       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0814 09:39:49.333214       1 server_others.go:148] Using iptables Proxier.
	I0814 09:39:49.333413       1 server_others.go:178] Tearing down inactive rules.
	I0814 09:39:49.909071       1 server.go:555] Version: v1.14.0
	I0814 09:39:49.914544       1 config.go:102] Starting endpoints config controller
	I0814 09:39:49.914532       1 config.go:202] Starting service config controller
	I0814 09:39:49.914916       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0814 09:39:49.914962       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0814 09:39:50.015172       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0814 09:39:50.015172       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88] <==
	* I0814 09:42:04.413940       1 serving.go:319] Generated self-signed cert in-memory
	W0814 09:42:04.924362       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	W0814 09:42:04.924389       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	W0814 09:42:04.924404       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	I0814 09:42:04.927303       1 server.go:142] Version: v1.14.0
	I0814 09:42:04.927795       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0814 09:42:04.933177       1 authorization.go:47] Authorization is disabled
	W0814 09:42:04.933606       1 authentication.go:55] Authentication is disabled
	I0814 09:42:04.933629       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0814 09:42:04.934320       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	I0814 09:42:09.501703       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0814 09:42:09.601910       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kube-scheduler [495e84cfb1834bce1069b2626727d804342cc869f3d557981840648e736172d4] <==
	* W0814 09:39:26.245994       1 authentication.go:55] Authentication is disabled
	I0814 09:39:26.246002       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0814 09:39:26.246418       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0814 09:39:28.905631       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:39:28.913339       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:39:28.913406       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:39:28.914993       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:39:28.914994       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:39:28.915082       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:39:28.915167       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:39:28.916857       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:39:28.916914       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:39:28.917041       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:39:29.906735       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:39:29.914403       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:39:29.915566       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:39:29.916721       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:39:29.917777       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:39:29.918943       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:39:29.919958       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:39:29.921046       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:39:29.922215       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:39:29.923309       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0814 09:39:31.802103       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0814 09:39:31.902248       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:41:39 UTC, end at Sat 2021-08-14 09:43:19 UTC. --
	Aug 14 09:42:32 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:32.715912     687 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457405     687 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457459     687 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457524     687 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457567     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 14 09:42:40 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:40.631594     687 pod_workers.go:190] Error syncing pod 90c2ae3f-fce3-11eb-977c-0242f298e734 ("coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"
	Aug 14 09:42:40 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:40.633340     687 pod_workers.go:190] Error syncing pod 91a7567d-fce3-11eb-977c-0242f298e734 ("storage-provisioner_kube-system(91a7567d-fce3-11eb-977c-0242f298e734)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(91a7567d-fce3-11eb-977c-0242f298e734)"
	Aug 14 09:42:42 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:42.765968     687 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Aug 14 09:42:42 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:42.766007     687 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Aug 14 09:42:43 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:43.427477     687 pod_workers.go:190] Error syncing pod 90c2ae3f-fce3-11eb-977c-0242f298e734 ("coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"
	Aug 14 09:42:45 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:45.646222     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:42:48 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:48.415732     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:42:51 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:51.349515     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:42:52 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:52.815112     687 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Aug 14 09:42:52 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:52.815155     687 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.432959     687 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.433000     687 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.433061     687 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.433095     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:02.415424     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:43:12 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:12.415995     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:16.701316     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf] <==
	* 2021/08/14 09:42:25 Using namespace: kubernetes-dashboard
	2021/08/14 09:42:25 Using in-cluster config to connect to apiserver
	2021/08/14 09:42:25 Using secret token for csrf signing
	2021/08/14 09:42:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:42:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:42:25 Successful initial request to the apiserver, version: v1.14.0
	2021/08/14 09:42:25 Generating JWE encryption key
	2021/08/14 09:42:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:42:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:42:26 Initializing JWE encryption key from synchronized object
	2021/08/14 09:42:26 Creating in-cluster Sidecar client
	2021/08/14 09:42:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:42:26 Serving insecurely on HTTP port: 9090
	2021/08/14 09:42:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:42:25 Starting overwatch
	
	* 
	* ==> storage-provisioner [70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586] <==
	* I0814 09:42:56.634465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:42:56.642438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:42:56.642498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:43:14.037604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:43:14.037736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210814093902-6746_3b73ab7f-bf47-4d39-ad89-9a8c3826e6d0!
	I0814 09:43:14.037739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91a5ebe0-fce3-11eb-977c-0242f298e734", APIVersion:"v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210814093902-6746_3b73ab7f-bf47-4d39-ad89-9a8c3826e6d0 became leader
	I0814 09:43:14.137941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210814093902-6746_3b73ab7f-bf47-4d39-ad89-9a8c3826e6d0!
	
	* 
	* ==> storage-provisioner [82bbbe4ef766ce7a77beb6a35c1a1d7d974312fb0d790b588286c07ecfe223c1] <==
	* I0814 09:42:09.742564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0814 09:42:39.744422       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
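Note on the logs above: the repeated "metrics.k8s.io/v1beta1: the server is currently unable to handle the request" errors in the kube-controller-manager and kubernetes-dashboard logs are consistent with the metrics-server APIService being registered while its backing pod never starts; the pod is stuck in ErrImagePull because the test deliberately points the MetricsServer image at the unreachable registry fake.domain (see the "addons enable metrics-server" entry in the Audit table below). A quick check of the unavailable aggregated API, assuming kubectl access to this profile's context, would be:
	kubectl --context old-k8s-version-20210814093902-6746 get apiservice v1beta1.metrics.k8s.io
Its Available condition would be expected to read False until metrics-server comes up.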
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746: exit status 2 (303.607678ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-wbkxs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 describe pod metrics-server-8546d8b77b-wbkxs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210814093902-6746 describe pod metrics-server-8546d8b77b-wbkxs: exit status 1 (60.680334ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-wbkxs" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210814093902-6746 describe pod metrics-server-8546d8b77b-wbkxs: exit status 1
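The NotFound above looks like a post-mortem race rather than an additional failure: the metrics-server pod reported as non-running a moment earlier appears to have been deleted between the list and the describe calls, so the old pod name no longer resolves. An equivalent one-step query that narrows that window, assuming the profile's context is still reachable, would be:
	kubectl --context old-k8s-version-20210814093902-6746 get po -A --field-selector=status.phase!=Running -o name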
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210814093902-6746
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210814093902-6746:

-- stdout --
	[
	    {
	        "Id": "58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d",
	        "Created": "2021-08-14T09:39:04.181006779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:41:39.607314274Z",
	            "FinishedAt": "2021-08-14T09:41:37.768809378Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/hostname",
	        "HostsPath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/hosts",
	        "LogPath": "/var/lib/docker/containers/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d/58773ee473db536a928f8f65990ab93ef9933501cc8029eeca5f713c37d5c30d-json.log",
	        "Name": "/old-k8s-version-20210814093902-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210814093902-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210814093902-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a5bc5cd3fe760ccd8f86c757457378902f2c7ed593eb34674234e2c149e8f5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210814093902-6746",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210814093902-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210814093902-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210814093902-6746",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210814093902-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0429f21e8f7b461ee1c8edfd3572a341b6c3ea8376909fc0ce2d8f26a6a3d50c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32932"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32929"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32931"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32930"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0429f21e8f7b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210814093902-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "58773ee473db"
	                    ],
	                    "NetworkID": "978faa9d778a689fe2bae1f7ad4f3c1e866ffce513602e1412ddcf32d9deb6cb",
	                    "EndpointID": "61d62faeece6c26082735ebd2b0af7e05b0601241dcaa15c49a8ec422abe2773",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746: exit status 2 (304.980014ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
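Here `{{.Host}}` prints Running while the command still exits nonzero: minikube status signals via its exit code when any tracked component is not in its expected state, and in this run the kubelet had just been stopped by the pause step (see the systemd "Stopped kubelet" lines in the kubelet log above), which is why the harness flags exit status 2 as "(may be ok)". For a fuller per-component view, assuming the same binary and profile, one could run:
	out/minikube-linux-amd64 status -p old-k8s-version-20210814093902-6746 --output json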
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210814093902-6746 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | force-systemd-flag-20210814093636-6746            | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:25 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                        |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210814093636-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:25 UTC | Sat, 14 Aug 2021 09:37:28 UTC |
	|         | force-systemd-flag-20210814093636-6746            |                                        |         |         |                               |                               |
	| start   | -p                                                | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:37:28 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | force-systemd-env-20210814093728-6746             |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | -v=5 --driver=docker                              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210814093728-6746             | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:12 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                        |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-env-20210814093728-6746  | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:12 UTC | Sat, 14 Aug 2021 09:38:15 UTC |
	|         | force-systemd-env-20210814093728-6746             |                                        |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:15 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | cert-options-20210814093815-6746                  |                                        |         |         |                               |                               |
	|         | --memory=2048                                     |                                        |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                        |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                        |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                        |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                        |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	| -p      | cert-options-20210814093815-6746                  | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                        |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                        |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210814093815-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:39:02 UTC |
	|         | cert-options-20210814093815-6746                  |                                        |         |         |                               |                               |
	| unpause | -p pause-20210814093545-6746                      | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:48 UTC | Sat, 14 Aug 2021 09:40:48 UTC |
	|         | --alsologtostderr -v=5                            |                                        |         |         |                               |                               |
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:54 UTC | Sat, 14 Aug 2021 09:41:01 UTC |
	|         | -n 25                                             |                                        |         |         |                               |                               |
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:02 UTC | Sat, 14 Aug 2021 09:41:03 UTC |
	|         | -n 25                                             |                                        |         |         |                               |                               |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:03 UTC | Sat, 14 Aug 2021 09:41:06 UTC |
	|         | --alsologtostderr -v=5                            |                                        |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:39:02 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                        |         |         |                               |                               |
	|         | --keep-context=false                              |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	| profile | list --output json                                | minikube                               | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:06 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746              | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:07 UTC | Sat, 14 Aug 2021 09:41:08 UTC |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:16 UTC | Sat, 14 Aug 2021 09:41:17 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:17 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                        |         |         |                               |                               |
	|         | --keep-context=false                              |                                        |         |         |                               |                               |
	|         | --driver=docker                                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                        |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746         | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                        |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                        |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                        |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:18 UTC | Sat, 14 Aug 2021 09:43:19 UTC |
	|         | logs -n 25                                        |                                        |         |         |                               |                               |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:43:10
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:43:10.295339  198227 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:43:10.295435  198227 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:10.295439  198227 out.go:311] Setting ErrFile to fd 2...
	I0814 09:43:10.295442  198227 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:10.295542  198227 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:43:10.295778  198227 out.go:305] Setting JSON to false
	I0814 09:43:10.332745  198227 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5153,"bootTime":1628929038,"procs":263,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:43:10.332881  198227 start.go:121] virtualization: kvm guest
	I0814 09:43:10.335219  198227 out.go:177] * [no-preload-20210814094108-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:43:10.336630  198227 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:43:10.335357  198227 notify.go:169] Checking for updates...
	I0814 09:43:10.338003  198227 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:43:10.339370  198227 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:43:10.340650  198227 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:43:10.341069  198227 config.go:177] Loaded profile config "no-preload-20210814094108-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:43:10.341459  198227 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:43:10.388220  198227 docker.go:132] docker version: linux-19.03.15
	I0814 09:43:10.388296  198227 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:43:10.466688  198227 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:43:10.423254246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:43:10.466804  198227 docker.go:244] overlay module found
	I0814 09:43:10.468774  198227 out.go:177] * Using the docker driver based on existing profile
	I0814 09:43:10.468807  198227 start.go:278] selected driver: docker
	I0814 09:43:10.468814  198227 start.go:751] validating driver "docker" against &{Name:no-preload-20210814094108-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210814094108-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:43:10.468930  198227 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:43:10.468971  198227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:43:10.468987  198227 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:43:10.470385  198227 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:43:10.471217  198227 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:43:10.547707  198227 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:43:10.505716548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0814 09:43:10.547830  198227 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:43:10.547863  198227 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:43:10.549708  198227 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:43:10.549804  198227 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:43:10.549830  198227 cni.go:93] Creating CNI manager for ""
	I0814 09:43:10.549837  198227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:43:10.549850  198227 start_flags.go:277] config:
	{Name:no-preload-20210814094108-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210814094108-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:43:10.551692  198227 out.go:177] * Starting control plane node no-preload-20210814094108-6746 in cluster no-preload-20210814094108-6746
	I0814 09:43:10.551725  198227 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:43:10.552935  198227 out.go:177] * Pulling base image ...
	I0814 09:43:10.552969  198227 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:43:10.553055  198227 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:43:10.553086  198227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/config.json ...
	I0814 09:43:10.553290  198227 cache.go:108] acquiring lock: {Name:mk45577cc3748bb07affaae091a26e8410047cac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553291  198227 cache.go:108] acquiring lock: {Name:mk4723e7eabe6689e250edc786d48af6de99ffbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553291  198227 cache.go:108] acquiring lock: {Name:mk4b87712df5985ae10899cad089779def4ce8b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553384  198227 cache.go:108] acquiring lock: {Name:mk7018cedaea6dcae7ca085fae097c7ae1351038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553384  198227 cache.go:108] acquiring lock: {Name:mk5d8f79fe96efa08c9b364c312d80509c3c09c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553415  198227 cache.go:108] acquiring lock: {Name:mk89f9a4f4f0278092d93ef1c75e49ac69a8b3d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553425  198227 cache.go:108] acquiring lock: {Name:mkf39f81098acf22603e4dcac428e043084b67f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553462  198227 cache.go:108] acquiring lock: {Name:mkb632bb0773648f7fd3acb464d108826a0e8e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553487  198227 cache.go:108] acquiring lock: {Name:mk2b036cb72c961bb5a0d34fa35818f29318d0a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553515  198227 cache.go:108] acquiring lock: {Name:mk6842265b6ce64ef0fb74765284422364c16edd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.553537  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0814 09:43:10.553550  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0814 09:43:10.553565  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0814 09:43:10.553563  198227 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 189.925µs
	I0814 09:43:10.553569  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0814 09:43:10.553582  198227 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553544  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0814 09:43:10.553586  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0814 09:43:10.553589  198227 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 308.56µs
	I0814 09:43:10.553588  198227 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 245.165µs
	I0814 09:43:10.553591  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0814 09:43:10.553522  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0814 09:43:10.553599  198227 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 320.855µs
	I0814 09:43:10.553616  198227 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0814 09:43:10.553610  198227 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553608  198227 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 127.046µs
	I0814 09:43:10.553610  198227 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 151.324µs
	I0814 09:43:10.553628  198227 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0814 09:43:10.553625  198227 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 345.434µs
	I0814 09:43:10.553621  198227 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 109.417µs
	I0814 09:43:10.553638  198227 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553616  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0814 09:43:10.553641  198227 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0814 09:43:10.553647  198227 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0814 09:43:10.553658  198227 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 258.034µs
	I0814 09:43:10.553663  198227 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 311.466µs
	I0814 09:43:10.553682  198227 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0814 09:43:10.553633  198227 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0814 09:43:10.553602  198227 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0814 09:43:10.553670  198227 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0814 09:43:10.553700  198227 cache.go:88] Successfully saved all images to host disk.
	I0814 09:43:10.627140  198227 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:43:10.627168  198227 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:43:10.627185  198227 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:43:10.627216  198227 start.go:313] acquiring machines lock for no-preload-20210814094108-6746: {Name:mkedefaa2332f31f505548533b13d397c9430bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:10.627289  198227 start.go:317] acquired machines lock for "no-preload-20210814094108-6746" in 56.323µs
	I0814 09:43:10.627306  198227 start.go:93] Skipping create...Using existing machine configuration
	I0814 09:43:10.627310  198227 fix.go:55] fixHost starting: 
	I0814 09:43:10.627544  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:43:10.664929  198227 fix.go:108] recreateIfNeeded on no-preload-20210814094108-6746: state=Stopped err=<nil>
	W0814 09:43:10.664977  198227 fix.go:134] unexpected machine state, will restart: <nil>
	I0814 09:43:10.667151  198227 out.go:177] * Restarting existing docker container for "no-preload-20210814094108-6746" ...
	I0814 09:43:10.667225  198227 cli_runner.go:115] Run: docker start no-preload-20210814094108-6746
	I0814 09:43:11.910546  198227 cli_runner.go:168] Completed: docker start no-preload-20210814094108-6746: (1.243289941s)
	I0814 09:43:11.910624  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:43:11.954979  198227 kic.go:420] container "no-preload-20210814094108-6746" state is running.
	I0814 09:43:11.955476  198227 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210814094108-6746
	I0814 09:43:12.006981  198227 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/config.json ...
	I0814 09:43:12.007177  198227 machine.go:88] provisioning docker machine ...
	I0814 09:43:12.007201  198227 ubuntu.go:169] provisioning hostname "no-preload-20210814094108-6746"
	I0814 09:43:12.007262  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:12.048149  198227 main.go:130] libmachine: Using SSH client type: native
	I0814 09:43:12.048384  198227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0814 09:43:12.048401  198227 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210814094108-6746 && echo "no-preload-20210814094108-6746" | sudo tee /etc/hostname
	I0814 09:43:12.048928  198227 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36356->127.0.0.1:32938: read: connection reset by peer
	I0814 09:43:15.191850  198227 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210814094108-6746
	
	I0814 09:43:15.191919  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:15.233326  198227 main.go:130] libmachine: Using SSH client type: native
	I0814 09:43:15.233477  198227 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0814 09:43:15.233496  198227 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210814094108-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210814094108-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210814094108-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:43:15.356002  198227 main.go:130] libmachine: SSH cmd err, output: <nil>: 
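	The /etc/hosts block above is idempotent: it leaves the file untouched when the hostname is already mapped, rewrites an existing 127.0.1.1 entry with sed, and appends a new one otherwise. The empty command output here suggests the first grep matched and nothing needed to change. An illustrative way to check the result from the host (not part of the test run):
	
		docker exec no-preload-20210814094108-6746 grep 127.0.1.1 /etc/hosts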
	I0814 09:43:15.356035  198227 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:43:15.356052  198227 ubuntu.go:177] setting up certificates
	I0814 09:43:15.356061  198227 provision.go:83] configureAuth start
	I0814 09:43:15.356106  198227 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210814094108-6746
	I0814 09:43:15.393583  198227 provision.go:138] copyHostCerts
	I0814 09:43:15.393637  198227 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:43:15.393647  198227 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:43:15.393702  198227 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:43:15.393771  198227 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:43:15.393784  198227 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:43:15.393806  198227 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:43:15.393855  198227 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:43:15.393862  198227 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:43:15.393879  198227 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:43:15.393918  198227 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210814094108-6746 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210814094108-6746]
	I0814 09:43:15.593141  198227 provision.go:172] copyRemoteCerts
	I0814 09:43:15.593191  198227 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:43:15.593224  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:15.631390  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:43:15.719619  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:43:15.734896  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0814 09:43:15.749924  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:43:15.764640  198227 provision.go:86] duration metric: configureAuth took 408.569026ms
	I0814 09:43:15.764659  198227 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:43:15.764828  198227 config.go:177] Loaded profile config "no-preload-20210814094108-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:43:15.764842  198227 machine.go:91] provisioned docker machine in 3.757649654s
	I0814 09:43:15.764851  198227 start.go:267] post-start starting for "no-preload-20210814094108-6746" (driver="docker")
	I0814 09:43:15.764857  198227 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:43:15.764894  198227 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:43:15.764926  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:15.803668  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:43:15.895816  198227 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:43:15.898640  198227 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:43:15.898671  198227 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:43:15.898691  198227 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:43:15.898698  198227 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:43:15.898717  198227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:43:15.898766  198227 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:43:15.898877  198227 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:43:15.898969  198227 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:43:15.905547  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:43:15.921387  198227 start.go:270] post-start completed in 156.524999ms
	I0814 09:43:15.921437  198227 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:43:15.921468  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:15.967477  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
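	The df/awk pipeline above returns only the Use% figure for /var: df -h prints a header row plus one data row, and awk's NR==2{print $5} picks the fifth field of that data row. Run by hand it prints a single value such as 12% (illustrative value):
	
		df -h /var | awk 'NR==2{print $5}'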
	I0814 09:43:16.052629  198227 fix.go:57] fixHost completed within 5.425313072s
	I0814 09:43:16.052658  198227 start.go:80] releasing machines lock for "no-preload-20210814094108-6746", held for 5.425358197s
	I0814 09:43:16.052778  198227 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210814094108-6746
	I0814 09:43:16.095519  198227 ssh_runner.go:149] Run: systemctl --version
	I0814 09:43:16.095572  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:16.095590  198227 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:43:16.095649  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:43:16.142189  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:43:16.143482  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:43:16.311294  198227 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:43:16.323507  198227 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:43:16.332294  198227 docker.go:153] disabling docker service ...
	I0814 09:43:16.332347  198227 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:43:16.341388  198227 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:43:16.349474  198227 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:43:16.410426  198227 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:43:16.474560  198227 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:43:16.483084  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:43:16.495540  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
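	The long base64 payload is minikube's generated containerd configuration; decoding it (as the command itself does with base64 -d) yields a config.toml that begins:
	
		root = "/var/lib/containerd"
		state = "/run/containerd"
		oom_score = 0
		[grpc]
		  address = "/run/containerd/containerd.sock"
	
	Further down it sets SystemdCgroup = false for the runc runtime and points the CNI conf_dir at /etc/cni/net.mk, matching the kubelet cni-conf-dir extra option in the profile config above.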
	I0814 09:43:16.509098  198227 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:43:16.517373  198227 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:43:16.517454  198227 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:43:16.524646  198227 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
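	The sysctl failure a few lines up is expected when the br_netfilter module has not been loaded yet, which is why minikube immediately falls back to modprobe before enabling IPv4 forwarding. A minimal standalone sketch of the same check-then-load sequence (an assumed equivalent, not minikube's actual code):
	
		# load br_netfilter only if the bridge sysctl is missing
		if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
		    sudo modprobe br_netfilter
		fi
		# ensure the node forwards pod traffic
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"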
	I0814 09:43:16.531005  198227 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:43:16.591159  198227 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:43:16.670501  198227 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:43:16.670567  198227 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:43:16.674111  198227 start.go:413] Will wait 60s for crictl version
	I0814 09:43:16.674175  198227 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:43:16.697662  198227 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:43:16Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	f62389f4ed462       523cad1a4df73       4 seconds ago        Exited              dashboard-metrics-scraper   3                   f693d0e825b58
	70a2684b60ecd       6e38f40d628db       24 seconds ago       Running             storage-provisioner         2                   a773999086895
	0a1774cc304d5       eb516548c180f       24 seconds ago       Running             coredns                     2                   98da8496f70fa
	627ec2ed6d607       9a07b5b4bfac0       55 seconds ago       Running             kubernetes-dashboard        0                   f018a356df108
	2b23c799a1063       56cc512116c8f       About a minute ago   Running             busybox                     1                   1a4b21d4ab645
	330312560468f       eb516548c180f       About a minute ago   Exited              coredns                     1                   98da8496f70fa
	6f7165ba4ba52       6de166512aa22       About a minute ago   Running             kindnet-cni                 1                   57f21be6d02bb
	82bbbe4ef766c       6e38f40d628db       About a minute ago   Exited              storage-provisioner         1                   a773999086895
	10ba05c467b3a       5cd54e388abaf       About a minute ago   Running             kube-proxy                  1                   40fe49db55780
	da2819ec20ab5       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              1                   241b264faab2c
	4a5588c5ee462       2c4adeb21b4ff       About a minute ago   Running             etcd                        1                   3af71ccfabc80
	18664f66f1c4c       00638a24688b0       About a minute ago   Running             kube-scheduler              1                   6f3c7a964584e
	c68b5e7c89b1d       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   0d451db486447
	c81024767969c       56cc512116c8f       2 minutes ago        Exited              busybox                     0                   cef340ae68014
	6480fa93026a8       6de166512aa22       3 minutes ago        Exited              kindnet-cni                 0                   5962da6023694
	66a9ce3661159       5cd54e388abaf       3 minutes ago        Exited              kube-proxy                  0                   1658fa912407e
	3bc1ef69b5579       2c4adeb21b4ff       3 minutes ago        Exited              etcd                        0                   bb6248b4cbda4
	495e84cfb1834       00638a24688b0       3 minutes ago        Exited              kube-scheduler              0                   2ae5c9fdc7988
	012cb6d80c0ca       ecf910f40d6e0       3 minutes ago        Exited              kube-apiserver              0                   37ea6089989ff
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:41:39 UTC, end at Sat 2021-08-14 09:43:20 UTC. --
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.443338522Z" level=info msg="TearDown network for sandbox \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\" successfully"
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.443353853Z" level=info msg="StopPodSandbox for \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\" returns successfully"
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.443552832Z" level=info msg="RemovePodSandbox for \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\""
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:02.447221763Z" level=info msg="RemovePodSandbox \"ed2c86fb2c38ea95be0ba0316b4cf19c7b72430caeeface5355ff784fa736ded\" returns successfully"
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.825096510Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.902190352Z" level=info msg="Finish piping \"stderr\" of container exec \"b6b766c74b2ab40b7e46a655648f95f52532f0044b827ce97947549ce61d8ebc\""
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.902194357Z" level=info msg="Finish piping \"stdout\" of container exec \"b6b766c74b2ab40b7e46a655648f95f52532f0044b827ce97947549ce61d8ebc\""
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.902275127Z" level=info msg="Exec process \"b6b766c74b2ab40b7e46a655648f95f52532f0044b827ce97947549ce61d8ebc\" exits with exit code 0 and error <nil>"
	Aug 14 09:43:04 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:04.903417602Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" returns with exit code 0"
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.825156747Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.907383716Z" level=info msg="Finish piping \"stderr\" of container exec \"ecf7606cda012a59b6438d32d5f8f4a3bccebc838576e7450b8b4698fb9ac98a\""
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.907383691Z" level=info msg="Finish piping \"stdout\" of container exec \"ecf7606cda012a59b6438d32d5f8f4a3bccebc838576e7450b8b4698fb9ac98a\""
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.907575915Z" level=info msg="Exec process \"ecf7606cda012a59b6438d32d5f8f4a3bccebc838576e7450b8b4698fb9ac98a\" exits with exit code 0 and error <nil>"
	Aug 14 09:43:14 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:14.908736770Z" level=info msg="ExecSync for \"4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74\" returns with exit code 0"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.417038754Z" level=info msg="CreateContainer within sandbox \"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,}"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.441904009Z" level=info msg="CreateContainer within sandbox \"f693d0e825b58bf2f45419bfe2440689f16078e68d8873fe3c021c1d0e982e0a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.442263004Z" level=info msg="StartContainer for \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.592420647Z" level=info msg="StartContainer for \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\" returns successfully"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.621211117Z" level=info msg="Finish piping stdout of container \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.621268095Z" level=info msg="Finish piping stderr of container \"f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.621992529Z" level=info msg="TaskExit event &TaskExit{ContainerID:f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0,ID:f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0,Pid:3076,ExitStatus:1,ExitedAt:2021-08-14 09:43:16.621740512 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.677535962Z" level=info msg="shim disconnected" id=f62389f4ed46263334255eb576bf41645118fa3554bf8cfcec4e5ee09dda97e0
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.677627706Z" level=error msg="copy shim log" error="read /proc/self/fd/152: file already closed"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.701953368Z" level=info msg="RemoveContainer for \"4fde8df1c8495a1c75ba07dbc26af23d1ac80b19740c938c7f22d070b735feed\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 containerd[336]: time="2021-08-14T09:43:16.708085183Z" level=info msg="RemoveContainer for \"4fde8df1c8495a1c75ba07dbc26af23d1ac80b19740c938c7f22d070b735feed\" returns successfully"
	
	* 
	* ==> coredns [0a1774cc304d55d5c9059ea913cbf8536a60eb223f4f7a67ad5d9f28a67d1607] <==
	* .:53
	2021-08-14T09:42:56.743Z [INFO] CoreDNS-1.3.1
	2021-08-14T09:42:56.743Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-14T09:42:56.743Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	
	* 
	* ==> coredns [330312560468f89bccfc3819edc3570f829561ec6f0f09fa8aa01c0a72a5daf0] <==
	* .:53
	2021-08-14T09:42:15.002Z [INFO] CoreDNS-1.3.1
	2021-08-14T09:42:15.002Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-14T09:42:15.002Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	E0814 09:42:40.003024       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0814 09:42:40.003024       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-nfccv.unknownuser.log.ERROR.20210814-094240.1: no such file or directory
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210814093902-6746
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210814093902-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=old-k8s-version-20210814093902-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_39_34_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:39:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:42:38 +0000   Sat, 14 Aug 2021 09:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20210814093902-6746
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951368Ki
	 pods:               110
	System Info:
	 Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	 System UUID:                932371fb-7e85-41f3-aeda-17115a947456
	 Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	 Kernel Version:             4.9.0-16-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.9
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (12 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                coredns-fb8b8dccf-nfccv                                        100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m33s
	  kube-system                etcd-old-k8s-version-20210814093902-6746                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                kindnet-9rbws                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m33s
	  kube-system                kube-apiserver-old-k8s-version-20210814093902-6746             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                kube-controller-manager-old-k8s-version-20210814093902-6746    200m (2%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                kube-proxy-xnmq2                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                kube-scheduler-old-k8s-version-20210814093902-6746             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                metrics-server-8546d8b77b-wbkxs                                100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         56s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-7z4lv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-q7m9p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  Starting                 3m57s                  kubelet, old-k8s-version-20210814093902-6746     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet, old-k8s-version-20210814093902-6746     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m32s                  kube-proxy, old-k8s-version-20210814093902-6746  Starting kube-proxy.
	  Normal  Starting                 79s                    kubelet, old-k8s-version-20210814093902-6746     Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)      kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x7 over 79s)      kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x8 over 79s)      kubelet, old-k8s-version-20210814093902-6746     Node old-k8s-version-20210814093902-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                    kubelet, old-k8s-version-20210814093902-6746     Updated Node Allocatable limit across pods
	  Normal  Starting                 71s                    kube-proxy, old-k8s-version-20210814093902-6746  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000001] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000000] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.004033] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000002] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +8.084358] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth95211dff
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce ce 48 d0 08 39 08 06        ........H..9..
	[  +0.103008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000003] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.000001] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-978faa9d778a
	[  +0.000001] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.020470] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth1aaa4059
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 6c 08 49 4a d1 08 06        ......2l.IJ...
	[  +0.000256] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth7067e1ac
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e 8d d5 44 9e 30 08 06        .........D.0..
	[ +11.959520] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfa4c84cf
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 55 0e 07 67 26 08 06        .......U..g&..
	[  +3.495552] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethc578d1e4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 06 df f9 78 a3 27 08 06        .........x.'..
	[  +8.611512] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth2cdba4ed
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 1c e7 62 8b 4d 08 06        .........b.M..
	[Aug14 09:43] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [3bc1ef69b5579728cea73b69d76eae4d026a708c7c438e4ca5de873dca0cb3f1] <==
	* 2021-08-14 09:39:25.123849 I | embed: listening for metrics on http://192.168.58.2:2381
	2021-08-14 09:39:25.123922 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-14 09:39:26.015729 I | raft: b2c6679ac05f2cf1 is starting a new election at term 1
	2021-08-14 09:39:26.015767 I | raft: b2c6679ac05f2cf1 became candidate at term 2
	2021-08-14 09:39:26.015784 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	2021-08-14 09:39:26.015799 I | raft: b2c6679ac05f2cf1 became leader at term 2
	2021-08-14 09:39:26.015804 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-14 09:39:26.015988 I | etcdserver: setting up the initial cluster version to 3.3
	2021-08-14 09:39:26.016773 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-14 09:39:26.016876 I | etcdserver: published {Name:old-k8s-version-20210814093902-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:39:26.017194 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-14 09:39:26.017364 I | embed: ready to serve client requests
	2021-08-14 09:39:26.017851 I | embed: ready to serve client requests
	2021-08-14 09:39:26.020056 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:39:26.020441 I | embed: serving client requests on 192.168.58.2:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-14 09:39:38.868421 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (2.062486615s) to execute
	2021-08-14 09:39:38.868548 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20210814093902-6746.169b22cf5d25c938\" " with result "range_response_count:1 size:533" took too long (2.441706977s) to execute
	2021-08-14 09:39:41.922040 W | wal: sync duration of 2.4843694s, expected less than 1s
	2021-08-14 09:39:42.917217 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (696.154229ms) to execute
	2021-08-14 09:39:42.917239 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (3.110932255s) to execute
	2021-08-14 09:39:42.917461 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (761.948723ms) to execute
	2021-08-14 09:39:42.917548 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20210814093902-6746.169b22cf5d25c938\" " with result "range_response_count:1 size:533" took too long (4.045035799s) to execute
	2021-08-14 09:39:42.917689 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210814093902-6746\" " with result "range_response_count:1 size:307" took too long (3.909537619s) to execute
	
	* 
	* ==> etcd [4a5588c5ee4620a38d0c4486820a047f7323dce681ec9bbe24c14c6142b0da74] <==
	* 2021-08-14 09:42:03.811743 I | etcdserver: advertise client URLs = https://192.168.58.2:2379
	2021-08-14 09:42:03.816058 I | etcdserver: restarting member b2c6679ac05f2cf1 in cluster 3a56e4ca95e2355c at commit index 545
	2021-08-14 09:42:03.816122 I | raft: b2c6679ac05f2cf1 became follower at term 2
	2021-08-14 09:42:03.816136 I | raft: newRaft b2c6679ac05f2cf1 [peers: [], term: 2, commit: 545, applied: 0, lastindex: 545, lastterm: 2]
	2021-08-14 09:42:03.823989 W | auth: simple token is not cryptographically signed
	2021-08-14 09:42:03.826248 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-14 09:42:03.828710 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-14 09:42:03.828847 I | embed: listening for metrics on http://192.168.58.2:2381
	2021-08-14 09:42:03.829041 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-14 09:42:03.829441 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-14 09:42:03.829568 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-14 09:42:03.829598 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-14 09:42:05.216475 I | raft: b2c6679ac05f2cf1 is starting a new election at term 2
	2021-08-14 09:42:05.216530 I | raft: b2c6679ac05f2cf1 became candidate at term 3
	2021-08-14 09:42:05.216566 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3
	2021-08-14 09:42:05.216584 I | raft: b2c6679ac05f2cf1 became leader at term 3
	2021-08-14 09:42:05.216600 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3
	2021-08-14 09:42:05.216942 I | etcdserver: published {Name:old-k8s-version-20210814093902-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:42:05.217042 I | embed: ready to serve client requests
	2021-08-14 09:42:05.217319 I | embed: ready to serve client requests
	2021-08-14 09:42:05.219093 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:42:05.219231 I | embed: serving client requests on 192.168.58.2:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-14 09:42:59.517996 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-nfccv\" " with result "range_response_count:1 size:1990" took too long (470.987491ms) to execute
	
	* 
	* ==> kernel <==
	*  09:43:21 up  1:26,  0 users,  load average: 3.64, 2.72, 2.02
	Linux old-k8s-version-20210814093902-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [012cb6d80c0ca231cf5aed243ba1167ad4ad7002149ac53e658d8d6294c43603] <==
	* I0814 09:41:04.855672       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:05.855791       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:05.855964       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:06.856146       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:06.856295       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:07.856449       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:07.856588       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:08.856693       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:08.856885       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:09.857031       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:09.857185       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:10.857320       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:10.857470       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:11.857630       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:11.857759       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:12.857917       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:12.858028       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:13.858201       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:13.858367       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:14.858534       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:14.858666       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:15.858813       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:15.858940       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:41:16.859112       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:41:16.859274       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-apiserver [da2819ec20ab572f1877c21950f118c67b1382ebf4ddd5ad879674629ed4b8e3] <==
	* E0814 09:43:10.728369       1 controller.go:108] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0814 09:43:10.728380       1 controller.go:121] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 09:43:10.744779       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:10.744944       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:11.745096       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:11.745210       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:12.745345       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:12.745456       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:13.745616       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:13.745748       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:14.745937       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:14.746082       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:15.746229       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:15.746359       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:16.746530       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:16.746613       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:17.746779       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:17.746906       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:18.747079       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:18.747216       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:19.747379       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:19.747562       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0814 09:43:20.747702       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0814 09:43:20.747842       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [c68b5e7c89b1df7dce9e80105351c431cd50fdfcfdb294c7d8282b6b80abb010] <==
	* I0814 09:42:25.004929       1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
	I0814 09:42:25.009775       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"c5d9fc4c-fce3-11eb-977c-0242f298e734", APIVersion:"apps/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-wbkxs
	I0814 09:42:25.083106       1 controller_utils.go:1034] Caches are synced for deployment controller
	I0814 09:42:25.087411       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"e8aafb81-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-5b494cc544 to 1
	I0814 09:42:25.087699       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"e8abe90a-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	I0814 09:42:25.089617       1 controller_utils.go:1034] Caches are synced for disruption controller
	I0814 09:42:25.089781       1 disruption.go:294] Sending events to api server.
	I0814 09:42:25.093541       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"ee4ec96c-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-7z4lv
	I0814 09:42:25.093974       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"ee4ec420-fce3-11eb-8319-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-q7m9p
	I0814 09:42:25.119029       1 controller_utils.go:1034] Caches are synced for ReplicationController controller
	I0814 09:42:25.187352       1 controller_utils.go:1034] Caches are synced for expand controller
	I0814 09:42:25.191437       1 controller_utils.go:1034] Caches are synced for PV protection controller
	I0814 09:42:25.213162       1 controller_utils.go:1034] Caches are synced for attach detach controller
	I0814 09:42:25.237395       1 controller_utils.go:1034] Caches are synced for persistent volume controller
	I0814 09:42:25.299360       1 controller_utils.go:1034] Caches are synced for HPA controller
	I0814 09:42:25.544267       1 controller_utils.go:1034] Caches are synced for resource quota controller
	I0814 09:42:25.579594       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	I0814 09:42:25.579614       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0814 09:42:26.202521       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0814 09:42:26.202723       1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
	W0814 09:42:27.302021       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0814 09:42:27.302313       1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
	I0814 09:42:27.402535       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	E0814 09:42:56.454784       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:42:59.404021       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [10ba05c467b3a6a9796795103a41dd5e21dc90412fd68f5c028d3d843829e5ab] <==
	* W0814 09:42:09.644431       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0814 09:42:09.717284       1 server_others.go:148] Using iptables Proxier.
	I0814 09:42:09.717450       1 server_others.go:178] Tearing down inactive rules.
	I0814 09:42:10.337127       1 server.go:555] Version: v1.14.0
	I0814 09:42:10.341304       1 config.go:202] Starting service config controller
	I0814 09:42:10.341320       1 config.go:102] Starting endpoints config controller
	I0814 09:42:10.341336       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0814 09:42:10.341337       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0814 09:42:10.441492       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0814 09:42:10.441493       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-proxy [66a9ce36611591121dee71fec40dd87cd51874561a51962cc10ca295869204e7] <==
	* W0814 09:39:49.324608       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0814 09:39:49.333214       1 server_others.go:148] Using iptables Proxier.
	I0814 09:39:49.333413       1 server_others.go:178] Tearing down inactive rules.
	I0814 09:39:49.909071       1 server.go:555] Version: v1.14.0
	I0814 09:39:49.914544       1 config.go:102] Starting endpoints config controller
	I0814 09:39:49.914532       1 config.go:202] Starting service config controller
	I0814 09:39:49.914916       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0814 09:39:49.914962       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0814 09:39:50.015172       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0814 09:39:50.015172       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [18664f66f1c4ce5d0d83965692193fa2e26ecb7183b069c43f6ba3adb159ed88] <==
	* I0814 09:42:04.413940       1 serving.go:319] Generated self-signed cert in-memory
	W0814 09:42:04.924362       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	W0814 09:42:04.924389       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	W0814 09:42:04.924404       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	I0814 09:42:04.927303       1 server.go:142] Version: v1.14.0
	I0814 09:42:04.927795       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0814 09:42:04.933177       1 authorization.go:47] Authorization is disabled
	W0814 09:42:04.933606       1 authentication.go:55] Authentication is disabled
	I0814 09:42:04.933629       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0814 09:42:04.934320       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	I0814 09:42:09.501703       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0814 09:42:09.601910       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kube-scheduler [495e84cfb1834bce1069b2626727d804342cc869f3d557981840648e736172d4] <==
	* W0814 09:39:26.245994       1 authentication.go:55] Authentication is disabled
	I0814 09:39:26.246002       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0814 09:39:26.246418       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0814 09:39:28.905631       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:39:28.913339       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:39:28.913406       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:39:28.914993       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:39:28.914994       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:39:28.915082       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:39:28.915167       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:39:28.916857       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:39:28.916914       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:39:28.917041       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:39:29.906735       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:39:29.914403       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:39:29.915566       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:39:29.916721       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:39:29.917777       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:39:29.918943       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:39:29.919958       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:39:29.921046       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:39:29.922215       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:39:29.923309       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0814 09:39:31.802103       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0814 09:39:31.902248       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:41:39 UTC, end at Sat 2021-08-14 09:43:21 UTC. --
	Aug 14 09:42:32 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:32.715912     687 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457405     687 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457459     687 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457524     687 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:42:37 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:37.457567     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 14 09:42:40 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:40.631594     687 pod_workers.go:190] Error syncing pod 90c2ae3f-fce3-11eb-977c-0242f298e734 ("coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"
	Aug 14 09:42:40 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:40.633340     687 pod_workers.go:190] Error syncing pod 91a7567d-fce3-11eb-977c-0242f298e734 ("storage-provisioner_kube-system(91a7567d-fce3-11eb-977c-0242f298e734)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(91a7567d-fce3-11eb-977c-0242f298e734)"
	Aug 14 09:42:42 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:42.765968     687 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Aug 14 09:42:42 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:42.766007     687 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Aug 14 09:42:43 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:43.427477     687 pod_workers.go:190] Error syncing pod 90c2ae3f-fce3-11eb-977c-0242f298e734 ("coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-nfccv_kube-system(90c2ae3f-fce3-11eb-977c-0242f298e734)"
	Aug 14 09:42:45 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:45.646222     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:42:48 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:48.415732     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:42:51 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:51.349515     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:42:52 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:52.815112     687 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Aug 14 09:42:52 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:42:52.815155     687 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.432959     687 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.433000     687 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.433061     687 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:43:00 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:00.433095     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 14 09:43:02 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:02.415424     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:43:12 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:12.415995     687 pod_workers.go:190] Error syncing pod ee42e462-fce3-11eb-8319-0242c0a83a02 ("metrics-server-8546d8b77b-wbkxs_kube-system(ee42e462-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 kubelet[687]: E0814 09:43:16.701316     687 pod_workers.go:190] Error syncing pod ee4fb864-fce3-11eb-8319-0242c0a83a02 ("dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-7z4lv_kubernetes-dashboard(ee4fb864-fce3-11eb-8319-0242c0a83a02)"
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:43:16 old-k8s-version-20210814093902-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [627ec2ed6d60733fe2a4369d8f2ab42ab6fe6b353cb573f07a7a1a09de5d2edf] <==
	* 2021/08/14 09:42:25 Using namespace: kubernetes-dashboard
	2021/08/14 09:42:25 Using in-cluster config to connect to apiserver
	2021/08/14 09:42:25 Using secret token for csrf signing
	2021/08/14 09:42:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:42:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:42:25 Successful initial request to the apiserver, version: v1.14.0
	2021/08/14 09:42:25 Generating JWE encryption key
	2021/08/14 09:42:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:42:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:42:26 Initializing JWE encryption key from synchronized object
	2021/08/14 09:42:26 Creating in-cluster Sidecar client
	2021/08/14 09:42:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:42:26 Serving insecurely on HTTP port: 9090
	2021/08/14 09:42:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:42:25 Starting overwatch
	
	* 
	* ==> storage-provisioner [70a2684b60ecd151838e7b63635d07cef3322c3d9a05e21b15d8dd2851d4d586] <==
	* I0814 09:42:56.634465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:42:56.642438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:42:56.642498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:43:14.037604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:43:14.037736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210814093902-6746_3b73ab7f-bf47-4d39-ad89-9a8c3826e6d0!
	I0814 09:43:14.037739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91a5ebe0-fce3-11eb-977c-0242f298e734", APIVersion:"v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210814093902-6746_3b73ab7f-bf47-4d39-ad89-9a8c3826e6d0 became leader
	I0814 09:43:14.137941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210814093902-6746_3b73ab7f-bf47-4d39-ad89-9a8c3826e6d0!
	
	* 
	* ==> storage-provisioner [82bbbe4ef766ce7a77beb6a35c1a1d7d974312fb0d790b588286c07ecfe223c1] <==
	* I0814 09:42:09.742564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0814 09:42:39.744422       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746: exit status 2 (309.223991ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-wbkxs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 describe pod metrics-server-8546d8b77b-wbkxs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210814093902-6746 describe pod metrics-server-8546d8b77b-wbkxs: exit status 1 (64.358231ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-wbkxs" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210814093902-6746 describe pod metrics-server-8546d8b77b-wbkxs: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.34s)
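
The post-mortem above follows a fixed two-step pattern: list the pods whose status.phase is not Running, then describe each one by name. The trailing NotFound error is a race, since metrics-server-8546d8b77b-wbkxs was deleted between the two kubectl calls. A minimal Go sketch of that flow, using a hypothetical helper name rather than minikube's actual helpers_test.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// postMortemPods lists non-running pods in all namespaces, then describes
	// each one. A pod can be garbage-collected between the two calls, which is
	// exactly the NotFound race visible in the log above.
	func postMortemPods(kubeContext string) error {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running").Output()
		if err != nil {
			return err
		}
		for _, pod := range strings.Fields(string(out)) {
			desc, derr := exec.Command("kubectl", "--context", kubeContext,
				"describe", "pod", pod).CombinedOutput()
			fmt.Printf("describe %s (err=%v):\n%s\n", pod, derr, desc)
		}
		return nil
	}

	func main() {
		_ = postMortemPods("old-k8s-version-20210814093902-6746")
	}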

                                                
                                    

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210814094325-6746 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:188: (dbg) Non-zero exit: kubectl --context embed-certs-20210814094325-6746 describe deploy/metrics-server -n kube-system: exit status 1 (71.44653ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20210814094325-6746 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:194: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
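The expected reference comes from the --registries and --images flags on the enable command: the registry value is prefixed to the image value for the matching component key, yielding fake.domain/k8s.gcr.io/echoserver:1.4, and the test then checks that the deployment description contains that string. A short Go sketch of the composition and containment check; the map-based composition is an assumption inferred from the flags and the failure message, not minikube's code:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// --registries=MetricsServer=fake.domain and
		// --images=MetricsServer=k8s.gcr.io/echoserver:1.4 are keyed by
		// component name; composing them as registry + "/" + image is an
		// assumption that matches the reference in the failure message.
		registries := map[string]string{"MetricsServer": "fake.domain"}
		images := map[string]string{"MetricsServer": "k8s.gcr.io/echoserver:1.4"}
		expected := registries["MetricsServer"] + "/" + images["MetricsServer"]

		describeOutput := "" // empty here: kubectl describe itself failed
		fmt.Println("expected:", expected)
		fmt.Println("found:", strings.Contains(describeOutput, expected))
	}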
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210814094325-6746
helpers_test.go:236: (dbg) docker inspect embed-certs-20210814094325-6746:

-- stdout --
	[
	    {
	        "Id": "d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576",
	        "Created": "2021-08-14T09:43:27.289846985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:43:27.72848935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hosts",
	        "LogPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576-json.log",
	        "Name": "/embed-certs-20210814094325-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210814094325-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210814094325-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210814094325-6746",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210814094325-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210814094325-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1838fbb2b81cf776d660ca54ec98eb0448d2cb990bc8a94dfb9e315857940de2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32942"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32939"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32941"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32940"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1838fbb2b81c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210814094325-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d2385af2cb05"
	                    ],
	                    "NetworkID": "dbc6f9acad495850f4b0b885d051bfbd2cce05a9032571d93062419b0fbb36d2",
	                    "EndpointID": "ee2ce64cb2ef784e85c7b119777ee5cdc4be8f92b6755bd2a73ce23c46784344",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
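The block above is raw docker inspect output; the two fields the harness keys on, State.Status and the published 22/tcp host port that SSH-backed commands need (127.0.0.1:32943 here), can be read with the Docker Engine Go SDK. A sketch, assuming the default client environment (DOCKER_HOST etc.):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
        "github.com/docker/go-connections/nat"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv)
        if err != nil {
            log.Fatal(err)
        }
        info, err := cli.ContainerInspect(context.TODO(), "embed-certs-20210814094325-6746")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("status:", info.State.Status) // "running" in the JSON above
        for _, b := range info.NetworkSettings.Ports[nat.Port("22/tcp")] {
            fmt.Printf("ssh: %s:%s\n", b.HostIP, b.HostPort) // 127.0.0.1:32943 above
        }
    }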
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210814094325-6746 logs -n 25
helpers_test.go:253: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | cert-options-20210814093815-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:15 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | cert-options-20210814093815-6746                  |                                     |         |         |                               |                               |
	|         | --memory=2048                                     |                                     |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                     |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                     |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                     |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                     |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	| -p      | cert-options-20210814093815-6746                  | cert-options-20210814093815-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:38:59 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                     |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                     |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210814093815-6746    | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:38:59 UTC | Sat, 14 Aug 2021 09:39:02 UTC |
	|         | cert-options-20210814093815-6746                  |                                     |         |         |                               |                               |
	| unpause | -p pause-20210814093545-6746                      | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:48 UTC | Sat, 14 Aug 2021 09:40:48 UTC |
	|         | --alsologtostderr -v=5                            |                                     |         |         |                               |                               |
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:40:54 UTC | Sat, 14 Aug 2021 09:41:01 UTC |
	|         | -n 25                                             |                                     |         |         |                               |                               |
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:02 UTC | Sat, 14 Aug 2021 09:41:03 UTC |
	|         | -n 25                                             |                                     |         |         |                               |                               |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:03 UTC | Sat, 14 Aug 2021 09:41:06 UTC |
	|         | --alsologtostderr -v=5                            |                                     |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:39:02 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                     |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                     |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                     |         |         |                               |                               |
	|         | --keep-context=false                              |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                     |         |         |                               |                               |
	| profile | list --output json                                | minikube                            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:06 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:07 UTC | Sat, 14 Aug 2021 09:41:08 UTC |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:16 UTC | Sat, 14 Aug 2021 09:41:17 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:17 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                     |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                     |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                     |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                     |         |         |                               |                               |
	|         | --keep-context=false                              |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                     |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                     |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:18 UTC | Sat, 14 Aug 2021 09:43:19 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:20 UTC | Sat, 14 Aug 2021 09:43:21 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:21 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                     |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:43:25
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:43:25.821328  202919 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:43:25.821395  202919 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:25.821400  202919 out.go:311] Setting ErrFile to fd 2...
	I0814 09:43:25.821404  202919 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:25.821494  202919 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:43:25.821729  202919 out.go:305] Setting JSON to false
	I0814 09:43:25.856304  202919 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5168,"bootTime":1628929038,"procs":241,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:43:25.856381  202919 start.go:121] virtualization: kvm guest
	I0814 09:43:25.858901  202919 out.go:177] * [embed-certs-20210814094325-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:43:25.860371  202919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:43:25.859021  202919 notify.go:169] Checking for updates...
	I0814 09:43:25.861818  202919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:43:25.863216  202919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:43:25.864640  202919 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:43:25.865076  202919 config.go:177] Loaded profile config "no-preload-20210814094108-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:43:25.865155  202919 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:43:25.865237  202919 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:43:25.865270  202919 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:43:25.910472  202919 docker.go:132] docker version: linux-19.03.15
	I0814 09:43:25.910538  202919 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:43:25.984911  202919 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:43:25.943363096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:43:25.985014  202919 docker.go:244] overlay module found
	I0814 09:43:25.987032  202919 out.go:177] * Using the docker driver based on user configuration
	I0814 09:43:25.987055  202919 start.go:278] selected driver: docker
	I0814 09:43:25.987061  202919 start.go:751] validating driver "docker" against <nil>
	I0814 09:43:25.987080  202919 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:43:25.987137  202919 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:43:25.987154  202919 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:43:25.988481  202919 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:43:25.989302  202919 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:43:26.064847  202919 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:43:26.023730679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:43:26.064941  202919 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:43:26.065094  202919 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:43:26.065116  202919 cni.go:93] Creating CNI manager for ""
	I0814 09:43:26.065122  202919 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:43:26.065129  202919 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:43:26.065136  202919 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:43:26.065141  202919 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:43:26.065151  202919 start_flags.go:277] config:
	{Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:43:26.067123  202919 out.go:177] * Starting control plane node embed-certs-20210814094325-6746 in cluster embed-certs-20210814094325-6746
	I0814 09:43:26.067156  202919 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:43:26.068511  202919 out.go:177] * Pulling base image ...
	I0814 09:43:26.068545  202919 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:43:26.068570  202919 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:43:26.068582  202919 cache.go:56] Caching tarball of preloaded images
	I0814 09:43:26.068640  202919 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:43:26.068702  202919 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:43:26.068721  202919 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:43:26.068844  202919 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/config.json ...
	I0814 09:43:26.068865  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/config.json: {Name:mkd49eb4ac94d96699037cc8c07c2ef62b590503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:26.143571  202919 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:43:26.143608  202919 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:43:26.143624  202919 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:43:26.143677  202919 start.go:313] acquiring machines lock for embed-certs-20210814094325-6746: {Name:mk9d63dfbf0330e30e75ccffedf22e0c93e8bd0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:43:26.143820  202919 start.go:317] acquired machines lock for "embed-certs-20210814094325-6746" in 124.79µs
	I0814 09:43:26.143842  202919 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:43:26.143922  202919 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:43:27.744915  198227 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:43:27.816371  198227 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:43:27.816435  198227 ssh_runner.go:149] Run: containerd --version
	I0814 09:43:27.842226  198227 ssh_runner.go:149] Run: containerd --version
	I0814 09:43:27.871578  198227 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0814 09:43:27.871658  198227 cli_runner.go:115] Run: docker network inspect no-preload-20210814094108-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:43:27.927012  198227 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0814 09:43:27.930469  198227 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:43:27.942111  198227 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:43:27.942159  198227 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:43:27.970495  198227 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:43:27.970524  198227 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:43:27.970578  198227 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:43:28.009559  198227 cni.go:93] Creating CNI manager for ""
	I0814 09:43:28.009581  198227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:43:28.009595  198227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:43:28.009612  198227 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210814094108-6746 NodeName:no-preload-20210814094108-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:43:28.009780  198227 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210814094108-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
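The three evictionHard values above are mangled in the log only, not in the cluster: the rendered kubelet config contains the literal "0%", and the text passes through a Printf-style formatter on its way to the log, which parses % followed by " as the verb %" with no matching argument and prints %!"(MISSING). A minimal reproduction of the fmt behavior (the formatter call is an assumption about how the log line was produced, not minikube's actual code path):

    package main

    import "fmt"

    func main() {
        // `%` followed by `"` is read as the verb %" with no argument left,
        // so fmt renders it as %!"(MISSING); the intended text is "0%".
        fmt.Printf("nodefs.available: \"0%\"\n")
        // Prints: nodefs.available: "0%!"(MISSING)
    }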
	I0814 09:43:28.009912  198227 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210814094108-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210814094108-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0814 09:43:28.009980  198227 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0814 09:43:28.017918  198227 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:43:28.018006  198227 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:43:28.025803  198227 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (580 bytes)
	I0814 09:43:28.040027  198227 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0814 09:43:28.053911  198227 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
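The "scp memory --> <path> (N bytes)" lines above indicate the runner streams an in-memory buffer to the remote path over the already-open SSH connection rather than copying a file from disk. A sketch of that idea using golang.org/x/crypto/ssh (the sudo tee write and the function shape are assumptions, not minikube's actual ssh_runner implementation):

    package sketch

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote streams payload as stdin of a remote write command,
    // avoiding a local temp file entirely.
    func writeRemote(c *ssh.Client, payload []byte, dst string) error {
        sess, err := c.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }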
	I0814 09:43:28.070914  198227 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:43:28.074344  198227 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:43:28.085424  198227 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746 for IP: 192.168.49.2
	I0814 09:43:28.085474  198227 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:43:28.085494  198227 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:43:28.085552  198227 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.key
	I0814 09:43:28.085576  198227 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/apiserver.key.dd3b5fb2
	I0814 09:43:28.085596  198227 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/proxy-client.key
	I0814 09:43:28.085707  198227 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:43:28.085762  198227 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:43:28.085772  198227 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:43:28.085807  198227 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:43:28.085856  198227 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:43:28.085883  198227 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:43:28.085933  198227 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:43:28.087289  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:43:28.109382  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:43:28.129608  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:43:28.149486  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:43:28.168908  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:43:28.188542  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:43:28.208719  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:43:28.228740  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:43:28.251656  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:43:28.272189  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:43:28.293349  198227 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:43:28.311842  198227 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:43:28.325542  198227 ssh_runner.go:149] Run: openssl version
	I0814 09:43:28.331033  198227 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:43:28.338537  198227 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:43:28.341556  198227 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:43:28.341605  198227 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:43:28.346660  198227 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:43:28.352998  198227 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:43:28.360270  198227 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:43:28.363158  198227 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:43:28.363202  198227 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:43:28.368267  198227 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:43:28.375173  198227 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:43:28.382337  198227 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:43:28.385486  198227 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:43:28.385531  198227 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:43:28.390252  198227 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
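The test -L / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: each trusted CA under /etc/ssl/certs is addressable through a symlink named <subject-hash>.0, which is how b5213941.0 maps to minikubeCA.pem. A spot-check sketch, reusing the path from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"	# expected to point back at minikubeCA.pem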
	I0814 09:43:28.396471  198227 kubeadm.go:390] StartCluster: {Name:no-preload-20210814094108-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210814094108-6746 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledSt
op:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:43:28.396577  198227 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:43:28.396631  198227 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:43:28.423998  198227 cri.go:76] found id: "71f6c079a8721279f515a24a181aed3da56c2b3b604b8d93a6bd246c2478169b"
	I0814 09:43:28.424026  198227 cri.go:76] found id: "daaabb38fcf6f9c53fa8ad810b696148acffd84e2c317eab97c98da1b050f945"
	I0814 09:43:28.424044  198227 cri.go:76] found id: "442d4e9a1912896949de99927aadbf21b1ea18a6597a543b1032b81d90584452"
	I0814 09:43:28.424052  198227 cri.go:76] found id: "abc8c3681c77c34280e1dc10691455be31d47b24fabefdff2861c58658388b2f"
	I0814 09:43:28.424058  198227 cri.go:76] found id: "bb20e41f688a67aa46e7c3db3c26606f430a4df485d47a00914b9a74b39e00c2"
	I0814 09:43:28.424065  198227 cri.go:76] found id: "36d80f12817982189102684839c5bac0ba1c2e254f80dca491cd65e6a36894de"
	I0814 09:43:28.424077  198227 cri.go:76] found id: "7f392928db2b67de3429df6a916c926eead4a3f778321dba01e1e894b42b1a6f"
	I0814 09:43:28.424082  198227 cri.go:76] found id: "478eb41217e80511168738cd64a30c998889b220004e11c59a9df9524d993aea"
	I0814 09:43:28.424088  198227 cri.go:76] found id: ""
	I0814 09:43:28.424131  198227 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:43:28.448187  198227 cri.go:103] JSON = null
	W0814 09:43:28.448245  198227 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
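The warning comes from comparing two views of the same runtime: crictl found 8 kube-system container IDs, while runc, asked for its own bookkeeping under the same root, returned null, so minikube cannot tell which containers (if any) are paused and skips the unpause step. Both halves of that comparison are reproducible by hand (sketch, same paths as in the log):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
	sudo runc --root /run/containerd/runc/k8s.io list -f json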
	I0814 09:43:28.448297  198227 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:43:28.456156  198227 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0814 09:43:28.456183  198227 kubeadm.go:600] restartCluster start
	I0814 09:43:28.456232  198227 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0814 09:43:28.462987  198227 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:28.463832  198227 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210814094108-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:43:28.463960  198227 kubeconfig.go:128] "no-preload-20210814094108-6746" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig - will repair!
	I0814 09:43:28.464432  198227 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:28.467319  198227 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:43:28.473529  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:28.473591  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:28.485941  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:28.686353  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:28.686438  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:28.699695  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:28.886966  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:28.887036  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:28.900939  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:29.086089  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:29.086157  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:29.099038  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:29.286182  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:29.286254  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:29.301067  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:29.486359  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:29.486435  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:29.500132  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:29.686428  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:29.686517  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:29.702193  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:29.886501  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:29.886580  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:29.900040  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:30.086331  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:30.086400  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:30.099875  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
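Every "Checking apiserver status" round above and below is the same probe on a roughly 200ms retry loop: pgrep -xnf matches the newest process whose full command line fits kube-apiserver.*minikube.*, and exit status 1 simply means no such process exists yet. Run in isolation it would look like this (sketch):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process yet"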
	I0814 09:43:26.146321  202919 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0814 09:43:26.146623  202919 start.go:160] libmachine.API.Create for "embed-certs-20210814094325-6746" (driver="docker")
	I0814 09:43:26.146661  202919 client.go:168] LocalClient.Create starting
	I0814 09:43:26.146732  202919 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:43:26.146808  202919 main.go:130] libmachine: Decoding PEM data...
	I0814 09:43:26.146844  202919 main.go:130] libmachine: Parsing certificate...
	I0814 09:43:26.146954  202919 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:43:26.146978  202919 main.go:130] libmachine: Decoding PEM data...
	I0814 09:43:26.146993  202919 main.go:130] libmachine: Parsing certificate...
	I0814 09:43:26.147429  202919 cli_runner.go:115] Run: docker network inspect embed-certs-20210814094325-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:43:26.183347  202919 cli_runner.go:162] docker network inspect embed-certs-20210814094325-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:43:26.183411  202919 network_create.go:255] running [docker network inspect embed-certs-20210814094325-6746] to gather additional debugging logs...
	I0814 09:43:26.183442  202919 cli_runner.go:115] Run: docker network inspect embed-certs-20210814094325-6746
	W0814 09:43:26.217886  202919 cli_runner.go:162] docker network inspect embed-certs-20210814094325-6746 returned with exit code 1
	I0814 09:43:26.217909  202919 network_create.go:258] error running [docker network inspect embed-certs-20210814094325-6746]: docker network inspect embed-certs-20210814094325-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210814094325-6746
	I0814 09:43:26.217920  202919 network_create.go:260] output of [docker network inspect embed-certs-20210814094325-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210814094325-6746
	
	** /stderr **
	I0814 09:43:26.217967  202919 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:43:26.253470  202919 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b3ba1e9c1cb0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:1e:7b:d7:7e}}
	I0814 09:43:26.255937  202919 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0001ec3c0] misses:0}
	I0814 09:43:26.256207  202919 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:43:26.256226  202919 network_create.go:106] attempt to create docker network embed-certs-20210814094325-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:43:26.256272  202919 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210814094325-6746
	I0814 09:43:26.323704  202919 network_create.go:90] docker network embed-certs-20210814094325-6746 192.168.58.0/24 created
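Having found 192.168.49.0/24 taken, minikube reserved the next free private /24 and created a dedicated bridge network for the profile. The resulting subnet and gateway can be read back with docker's Go-template output, e.g. (sketch):

	docker network inspect embed-certs-20210814094325-6746 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'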
	I0814 09:43:26.323740  202919 kic.go:106] calculated static IP "192.168.58.2" for the "embed-certs-20210814094325-6746" container
	I0814 09:43:26.323791  202919 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:43:26.362354  202919 cli_runner.go:115] Run: docker volume create embed-certs-20210814094325-6746 --label name.minikube.sigs.k8s.io=embed-certs-20210814094325-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:43:26.399685  202919 oci.go:102] Successfully created a docker volume embed-certs-20210814094325-6746
	I0814 09:43:26.399758  202919 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210814094325-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210814094325-6746 --entrypoint /usr/bin/test -v embed-certs-20210814094325-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:43:27.168107  202919 oci.go:106] Successfully prepared a docker volume embed-certs-20210814094325-6746
	W0814 09:43:27.168163  202919 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:43:27.168171  202919 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:43:27.168186  202919 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:43:27.168217  202919 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:43:27.168219  202919 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:43:27.168280  202919 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210814094325-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:43:27.246516  202919 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210814094325-6746 --name embed-certs-20210814094325-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210814094325-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210814094325-6746 --network embed-certs-20210814094325-6746 --ip 192.168.58.2 --volume embed-certs-20210814094325-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:43:27.737492  202919 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Running}}
	I0814 09:43:27.780048  202919 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:43:27.828503  202919 cli_runner.go:115] Run: docker exec embed-certs-20210814094325-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:43:27.983354  202919 oci.go:278] the created container "embed-certs-20210814094325-6746" has a running status.
	I0814 09:43:27.983394  202919 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa...
	I0814 09:43:28.379457  202919 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:43:28.751242  202919 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:43:28.792031  202919 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:43:28.792053  202919 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210814094325-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:43:30.286348  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:30.286425  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:30.299672  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:30.486888  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:30.486982  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:30.499990  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:30.686201  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:30.686280  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:30.699874  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:30.886081  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:30.886155  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:30.902368  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:31.086621  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:31.086702  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:31.100808  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:31.286931  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:31.287020  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:31.300877  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:31.486129  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:31.486185  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:31.498413  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:31.498429  198227 api_server.go:164] Checking apiserver status ...
	I0814 09:43:31.498455  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:43:31.509791  198227 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:31.509817  198227 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0814 09:43:31.509827  198227 kubeadm.go:1032] stopping kube-system containers ...
	I0814 09:43:31.509839  198227 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:43:31.509884  198227 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:43:31.533381  198227 cri.go:76] found id: "71f6c079a8721279f515a24a181aed3da56c2b3b604b8d93a6bd246c2478169b"
	I0814 09:43:31.533405  198227 cri.go:76] found id: "daaabb38fcf6f9c53fa8ad810b696148acffd84e2c317eab97c98da1b050f945"
	I0814 09:43:31.533410  198227 cri.go:76] found id: "442d4e9a1912896949de99927aadbf21b1ea18a6597a543b1032b81d90584452"
	I0814 09:43:31.533414  198227 cri.go:76] found id: "abc8c3681c77c34280e1dc10691455be31d47b24fabefdff2861c58658388b2f"
	I0814 09:43:31.533418  198227 cri.go:76] found id: "bb20e41f688a67aa46e7c3db3c26606f430a4df485d47a00914b9a74b39e00c2"
	I0814 09:43:31.533422  198227 cri.go:76] found id: "36d80f12817982189102684839c5bac0ba1c2e254f80dca491cd65e6a36894de"
	I0814 09:43:31.533425  198227 cri.go:76] found id: "7f392928db2b67de3429df6a916c926eead4a3f778321dba01e1e894b42b1a6f"
	I0814 09:43:31.533429  198227 cri.go:76] found id: "478eb41217e80511168738cd64a30c998889b220004e11c59a9df9524d993aea"
	I0814 09:43:31.533432  198227 cri.go:76] found id: ""
	I0814 09:43:31.533436  198227 cri.go:221] Stopping containers: [71f6c079a8721279f515a24a181aed3da56c2b3b604b8d93a6bd246c2478169b daaabb38fcf6f9c53fa8ad810b696148acffd84e2c317eab97c98da1b050f945 442d4e9a1912896949de99927aadbf21b1ea18a6597a543b1032b81d90584452 abc8c3681c77c34280e1dc10691455be31d47b24fabefdff2861c58658388b2f bb20e41f688a67aa46e7c3db3c26606f430a4df485d47a00914b9a74b39e00c2 36d80f12817982189102684839c5bac0ba1c2e254f80dca491cd65e6a36894de 7f392928db2b67de3429df6a916c926eead4a3f778321dba01e1e894b42b1a6f 478eb41217e80511168738cd64a30c998889b220004e11c59a9df9524d993aea]
	I0814 09:43:31.533479  198227 ssh_runner.go:149] Run: which crictl
	I0814 09:43:31.536304  198227 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 71f6c079a8721279f515a24a181aed3da56c2b3b604b8d93a6bd246c2478169b daaabb38fcf6f9c53fa8ad810b696148acffd84e2c317eab97c98da1b050f945 442d4e9a1912896949de99927aadbf21b1ea18a6597a543b1032b81d90584452 abc8c3681c77c34280e1dc10691455be31d47b24fabefdff2861c58658388b2f bb20e41f688a67aa46e7c3db3c26606f430a4df485d47a00914b9a74b39e00c2 36d80f12817982189102684839c5bac0ba1c2e254f80dca491cd65e6a36894de 7f392928db2b67de3429df6a916c926eead4a3f778321dba01e1e894b42b1a6f 478eb41217e80511168738cd64a30c998889b220004e11c59a9df9524d993aea
	I0814 09:43:31.560269  198227 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0814 09:43:31.569353  198227 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:43:31.575434  198227 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 14 09:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 14 09:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 14 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 14 09:41 /etc/kubernetes/scheduler.conf
	
	I0814 09:43:31.575471  198227 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 09:43:31.582318  198227 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 09:43:31.588151  198227 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 09:43:31.594244  198227 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:31.594288  198227 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:43:31.599957  198227 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 09:43:31.606008  198227 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:43:31.606045  198227 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:43:31.611557  198227 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:43:31.618010  198227 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0814 09:43:31.618025  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:43:31.658721  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:43:32.359627  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:43:32.479261  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:43:32.528728  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:43:32.606899  198227 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:43:32.606955  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:33.120016  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:33.619422  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:34.120171  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:34.619625  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:35.119594  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:31.305442  202919 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210814094325-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.13711842s)
	I0814 09:43:31.305470  202919 kic.go:188] duration metric: took 4.137248 seconds to extract preloaded images to volume
	I0814 09:43:31.305555  202919 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:43:31.342821  202919 machine.go:88] provisioning docker machine ...
	I0814 09:43:31.342857  202919 ubuntu.go:169] provisioning hostname "embed-certs-20210814094325-6746"
	I0814 09:43:31.342952  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:43:31.379682  202919 main.go:130] libmachine: Using SSH client type: native
	I0814 09:43:31.379841  202919 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0814 09:43:31.379857  202919 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210814094325-6746 && echo "embed-certs-20210814094325-6746" | sudo tee /etc/hostname
	I0814 09:43:31.511499  202919 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210814094325-6746
	
	I0814 09:43:31.511585  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:43:31.552908  202919 main.go:130] libmachine: Using SSH client type: native
	I0814 09:43:31.553084  202919 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0814 09:43:31.553113  202919 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210814094325-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210814094325-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210814094325-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:43:31.676047  202919 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:43:31.676077  202919 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.mini
kube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:43:31.676113  202919 ubuntu.go:177] setting up certificates
	I0814 09:43:31.676129  202919 provision.go:83] configureAuth start
	I0814 09:43:31.676174  202919 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:43:31.714476  202919 provision.go:138] copyHostCerts
	I0814 09:43:31.714528  202919 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:43:31.714535  202919 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:43:31.714589  202919 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:43:31.714671  202919 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:43:31.714681  202919 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:43:31.714697  202919 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:43:31.714770  202919 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:43:31.714781  202919 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:43:31.714796  202919 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:43:31.714831  202919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210814094325-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210814094325-6746]
	I0814 09:43:31.810563  202919 provision.go:172] copyRemoteCerts
	I0814 09:43:31.810613  202919 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:43:31.810647  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:43:31.850161  202919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:43:31.939313  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:43:31.954910  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0814 09:43:31.969820  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:43:31.984952  202919 provision.go:86] duration metric: configureAuth took 308.809819ms
	I0814 09:43:31.984971  202919 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:43:31.985121  202919 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:43:31.985133  202919 machine.go:91] provisioned docker machine in 642.290799ms
	I0814 09:43:31.985140  202919 client.go:171] LocalClient.Create took 5.838474203s
	I0814 09:43:31.985159  202919 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20210814094325-6746" took 5.838535836s
	I0814 09:43:31.985170  202919 start.go:267] post-start starting for "embed-certs-20210814094325-6746" (driver="docker")
	I0814 09:43:31.985180  202919 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:43:31.985224  202919 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:43:31.985264  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:43:32.023809  202919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:43:32.111589  202919 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:43:32.114165  202919 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:43:32.114187  202919 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:43:32.114198  202919 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:43:32.114203  202919 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:43:32.114211  202919 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:43:32.114249  202919 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:43:32.114322  202919 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:43:32.114429  202919 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:43:32.120538  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:43:32.138516  202919 start.go:270] post-start completed in 153.333175ms
	I0814 09:43:32.138795  202919 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:43:32.176654  202919 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/config.json ...
	I0814 09:43:32.176866  202919 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:43:32.176904  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:43:32.217800  202919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:43:32.304460  202919 start.go:129] duration metric: createHost completed in 6.160528069s
	I0814 09:43:32.304483  202919 start.go:80] releasing machines lock for "embed-certs-20210814094325-6746", held for 6.160651653s
	I0814 09:43:32.304551  202919 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:43:32.342936  202919 ssh_runner.go:149] Run: systemctl --version
	I0814 09:43:32.342993  202919 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:43:32.342999  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:43:32.343055  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:43:32.386906  202919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:43:32.392874  202919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:43:32.499463  202919 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:43:32.509923  202919 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:43:32.519396  202919 docker.go:153] disabling docker service ...
	I0814 09:43:32.519448  202919 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:43:32.537942  202919 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:43:32.546860  202919 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:43:32.614268  202919 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:43:32.671421  202919 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:43:32.679358  202919 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:43:32.690403  202919 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0814 09:43:32.702244  202919 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:43:32.708203  202919 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:43:32.708245  202919 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:43:32.715003  202919 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
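These two runs cover the standard bridge-CNI prerequisites: the sysctl check above failed only because br_netfilter was not yet loaded, so the module is loaded and IPv4 forwarding switched on. The equivalent manual check, as a sketch (bridge-nf-call-iptables normally defaults to 1 once the module is in):

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward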
	I0814 09:43:32.720657  202919 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:43:32.779859  202919 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:43:32.845686  202919 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:43:32.845751  202919 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:43:32.849856  202919 start.go:413] Will wait 60s for crictl version
	I0814 09:43:32.849912  202919 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:43:32.874139  202919 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:43:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:43:35.619451  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:36.120039  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:36.620283  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:37.120152  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:37.620416  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:38.120085  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:38.620185  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:43:38.705830  198227 api_server.go:70] duration metric: took 6.098931165s to wait for apiserver process to appear ...
	I0814 09:43:38.705861  198227 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:43:38.705873  198227 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 09:43:41.968898  198227 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 09:43:41.968926  198227 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 09:43:42.469615  198227 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 09:43:42.473930  198227 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:43:42.473952  198227 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:43:42.969070  198227 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 09:43:42.973466  198227 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:43:42.973485  198227 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:43:43.469013  198227 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 09:43:43.473381  198227 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
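The 403 / 500 / 200 progression above is the normal boot sequence for a restarting apiserver: the anonymous probe is rejected until the rbac/bootstrap-roles poststarthook installs the policy that lets unauthenticated clients read /healthz, then the endpoint returns 500 with a [+]/[-] breakdown while the remaining poststarthooks finish, and finally 200. The same verbose breakdown can be requested by hand; a sketch against the endpoint from this log:
	curl -k https://192.168.49.2:8443/healthz?verbose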
	I0814 09:43:43.479016  198227 api_server.go:139] control plane version: v1.22.0-rc.0
	I0814 09:43:43.479035  198227 api_server.go:129] duration metric: took 4.773167389s to wait for apiserver health ...
	I0814 09:43:43.479080  198227 cni.go:93] Creating CNI manager for ""
	I0814 09:43:43.479093  198227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:43:43.920913  202919 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:43:44.016843  202919 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:43:44.016908  202919 ssh_runner.go:149] Run: containerd --version
	I0814 09:43:44.041726  202919 ssh_runner.go:149] Run: containerd --version
	I0814 09:43:43.481360  198227 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:43:43.481413  198227 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:43:43.484781  198227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0814 09:43:43.484816  198227 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:43:43.497068  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
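Both clusters interleaved in this log (PIDs 198227 and 202919) pick kindnet because the docker driver is paired with the containerd runtime, and each applies the manifest with its own cached kubectl binary against the cluster-local kubeconfig. A quick manual check of the result, with the binary path and kubeconfig taken from the log (the kindnet-d7p49 pod it creates appears in the pod list just below):
	sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide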
	I0814 09:43:43.748225  198227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:43:43.758787  198227 system_pods.go:59] 9 kube-system pods found
	I0814 09:43:43.758815  198227 system_pods.go:61] "coredns-78fcd69978-hqkbt" [9089a11f-e5ee-4b9d-96e1-cd37f498a734] Running
	I0814 09:43:43.758820  198227 system_pods.go:61] "etcd-no-preload-20210814094108-6746" [5905e680-a394-4232-a708-a693351c4de1] Running
	I0814 09:43:43.758824  198227 system_pods.go:61] "kindnet-d7p49" [2b20a1d3-297d-49ab-a7a0-9a088d381a4b] Running
	I0814 09:43:43.758828  198227 system_pods.go:61] "kube-apiserver-no-preload-20210814094108-6746" [9820dae6-cb19-44aa-9244-b5594ca173d3] Running
	I0814 09:43:43.758835  198227 system_pods.go:61] "kube-controller-manager-no-preload-20210814094108-6746" [27542be9-5c25-4dd0-8068-9e8e1d3ada78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 09:43:43.758839  198227 system_pods.go:61] "kube-proxy-68rn4" [4254bd45-8e5a-4d48-a1f4-d98743924379] Running
	I0814 09:43:43.758847  198227 system_pods.go:61] "kube-scheduler-no-preload-20210814094108-6746" [6ca53b7b-e504-4c1f-8b36-9c28a6c93dd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:43:43.758856  198227 system_pods.go:61] "metrics-server-7c784ccb57-lsbk6" [0508c1e5-48c5-44b4-9426-784c26079ebb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:43:43.758867  198227 system_pods.go:61] "storage-provisioner" [131ab5cd-c9a0-40be-8dce-eb285d21dd3a] Running
	I0814 09:43:43.758873  198227 system_pods.go:74] duration metric: took 10.625899ms to wait for pod list to return data ...
	I0814 09:43:43.758885  198227 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:43:43.761664  198227 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:43:43.761709  198227 node_conditions.go:123] node cpu capacity is 8
	I0814 09:43:43.761722  198227 node_conditions.go:105] duration metric: took 2.829277ms to run NodePressure ...
	I0814 09:43:43.761738  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:43:44.128338  198227 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0814 09:43:44.133187  198227 kubeadm.go:746] kubelet initialised
	I0814 09:43:44.133208  198227 kubeadm.go:747] duration metric: took 4.830543ms waiting for restarted kubelet to initialise ...
	I0814 09:43:44.133215  198227 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:43:44.138357  198227 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-hqkbt" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:44.213857  198227 pod_ready.go:92] pod "coredns-78fcd69978-hqkbt" in "kube-system" namespace has status "Ready":"True"
	I0814 09:43:44.213877  198227 pod_ready.go:81] duration metric: took 75.494406ms waiting for pod "coredns-78fcd69978-hqkbt" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:44.213890  198227 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:44.219812  198227 pod_ready.go:92] pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:43:44.219835  198227 pod_ready.go:81] duration metric: took 5.937117ms waiting for pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:44.219852  198227 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:44.225823  198227 pod_ready.go:92] pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:43:44.225839  198227 pod_ready.go:81] duration metric: took 5.978391ms waiting for pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:44.225851  198227 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:44.064624  202919 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:43:44.064694  202919 cli_runner.go:115] Run: docker network inspect embed-certs-20210814094325-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:43:44.102625  202919 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:43:44.106798  202919 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:43:44.119446  202919 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:43:44.119517  202919 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:43:44.146635  202919 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:43:44.146658  202919 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:43:44.146705  202919 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:43:44.230033  202919 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:43:44.230057  202919 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:43:44.230114  202919 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:43:44.275554  202919 cni.go:93] Creating CNI manager for ""
	I0814 09:43:44.275583  202919 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:43:44.275593  202919 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:43:44.275605  202919 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210814094325-6746 NodeName:embed-certs-20210814094325-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:43:44.275726  202919 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210814094325-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
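The rendered file is four YAML documents separated by ---: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration that kubeadm distributes to the node components. Before the real init runs below, the same file can be validated without mutating the node; a sketch, assuming the path from the log:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run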
	I0814 09:43:44.275809  202919 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210814094325-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
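The empty ExecStart= in the drop-in above is the standard systemd override idiom: a simple service may only have one ExecStart, so the first line clears the value inherited from the base kubelet.service and the second supplies minikube's full command line. Once the file lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 576-byte scp just below), the merged unit can be inspected on the node with:
	systemctl cat kubelet   # prints kubelet.service plus every drop-in, in apply order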
	I0814 09:43:44.275867  202919 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:43:44.322240  202919 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:43:44.322314  202919 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:43:44.330802  202919 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0814 09:43:44.346869  202919 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:43:44.361561  202919 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I0814 09:43:44.373445  202919 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:43:44.376191  202919 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
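This /etc/hosts rewrite is idempotent: grep -v drops any existing line ending in the tab-separated host name, the fresh mapping is appended, and the temp file is copied back with sudo because a bare > redirect would not run as root. Together with the host.minikube.internal entry added at 09:43:44.10 above, the node ends up with:
	192.168.58.1	host.minikube.internal
	192.168.58.2	control-plane.minikube.internal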
	I0814 09:43:44.384545  202919 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746 for IP: 192.168.58.2
	I0814 09:43:44.384586  202919 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:43:44.384601  202919 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:43:44.384648  202919 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.key
	I0814 09:43:44.384663  202919 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.crt with IP's: []
	I0814 09:43:44.727313  202919 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.crt ...
	I0814 09:43:44.727345  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.crt: {Name:mk810cdb9f6a8982310d031c3ad8cdc533eaadf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:44.727505  202919 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.key ...
	I0814 09:43:44.727520  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.key: {Name:mkebcf1d298d5241e5335e57f2d5b0493130e9a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:44.727598  202919 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key.cee25041
	I0814 09:43:44.727607  202919 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:43:44.792991  202919 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt.cee25041 ...
	I0814 09:43:44.793028  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt.cee25041: {Name:mk3072763111548ae076c452305add4f5243bcdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:44.793222  202919 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key.cee25041 ...
	I0814 09:43:44.793241  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key.cee25041: {Name:mk47d959efcb9a05e4e29325a1fd2fb231da819c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:44.793327  202919 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt
	I0814 09:43:44.793381  202919 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key
	I0814 09:43:44.793429  202919 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key
	I0814 09:43:44.793436  202919 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.crt with IP's: []
	I0814 09:43:45.016660  202919 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.crt ...
	I0814 09:43:45.016692  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.crt: {Name:mk51266e533be0da515891ab080eb0706d011d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:45.016872  202919 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key ...
	I0814 09:43:45.016891  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key: {Name:mk207cfd6d6dd7d0d3827a4a6a24bd713b550c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:43:45.017090  202919 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:43:45.017138  202919 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:43:45.017150  202919 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:43:45.017185  202919 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:43:45.017222  202919 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:43:45.017256  202919 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:43:45.017309  202919 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:43:45.018214  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:43:45.101091  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:43:45.117471  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:43:45.133248  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:43:45.148756  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:43:45.163869  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:43:45.178865  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:43:45.193964  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:43:45.209417  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:43:45.224764  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:43:45.240547  202919 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:43:45.255947  202919 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:43:45.267304  202919 ssh_runner.go:149] Run: openssl version
	I0814 09:43:45.271564  202919 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:43:45.278125  202919 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:43:45.280861  202919 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:43:45.280895  202919 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:43:45.285141  202919 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:43:45.291413  202919 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:43:45.298001  202919 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:43:45.300646  202919 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:43:45.300688  202919 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:43:45.305294  202919 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:43:45.312610  202919 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:43:45.318976  202919 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:43:45.321721  202919 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:43:45.321753  202919 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:43:45.326079  202919 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:43:45.332324  202919 kubeadm.go:390] StartCluster: {Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:43:45.332395  202919 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:43:45.332432  202919 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:43:45.353763  202919 cri.go:76] found id: ""
	I0814 09:43:45.353806  202919 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:43:45.359795  202919 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:43:45.365771  202919 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:43:45.365812  202919 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:43:45.371798  202919 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:43:45.371837  202919 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
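The long --ignore-preflight-errors list is deliberate under the docker driver (see the ignoring-SystemVerification line above): inside the kic container the swap, memory, port, and bridge-nf checks either cannot be evaluated or are satisfied by the host instead. To see what kubeadm would otherwise flag, the preflight phase can be run on its own; a sketch using the same pinned binary:
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase preflight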
	I0814 09:43:45.632613  202919 out.go:204]   - Generating certificates and keys ...
	I0814 09:43:46.235207  198227 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:43:48.235725  198227 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:43:47.829379  202919 out.go:204]   - Booting up control plane ...
	I0814 09:43:50.736592  198227 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:43:53.236000  198227 pod_ready.go:102] pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:43:54.738366  198227 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:43:54.738397  198227 pod_ready.go:81] duration metric: took 10.512536683s waiting for pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:54.738410  198227 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-68rn4" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:54.743022  198227 pod_ready.go:92] pod "kube-proxy-68rn4" in "kube-system" namespace has status "Ready":"True"
	I0814 09:43:54.743039  198227 pod_ready.go:81] duration metric: took 4.620737ms waiting for pod "kube-proxy-68rn4" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:54.743050  198227 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:54.746801  198227 pod_ready.go:92] pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:43:54.746817  198227 pod_ready.go:81] duration metric: took 3.758815ms waiting for pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:54.746825  198227 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace to be "Ready" ...
	I0814 09:43:56.757334  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:43:59.255597  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:01.255823  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:03.803501  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:07.648973  202919 out.go:204]   - Configuring RBAC rules ...
	I0814 09:44:08.062595  202919 cni.go:93] Creating CNI manager for ""
	I0814 09:44:08.062620  202919 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:44:07.137817  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:09.255658  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:08.064432  202919 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:44:08.064499  202919 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:44:08.067958  202919 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:44:08.067974  202919 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:44:08.080148  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:44:08.435377  202919 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:44:08.435491  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:08.435517  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=embed-certs-20210814094325-6746 minikube.k8s.io/updated_at=2021_08_14T09_44_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:08.450756  202919 ops.go:34] apiserver oom_adj: -16
	I0814 09:44:08.529715  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:09.093154  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:09.593456  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:10.092854  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:10.592843  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:11.255861  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:13.256974  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:11.092703  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:11.593096  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:12.093394  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:12.592602  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:13.093234  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:13.592910  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:14.092673  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:14.592671  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:15.093368  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:15.593381  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:15.755897  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:17.756388  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:16.092887  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:16.592624  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:17.092750  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:17.592768  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:18.093171  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:18.593315  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:19.093228  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:19.592617  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:20.092855  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:20.593438  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:21.093362  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:21.592737  202919 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:44:21.659414  202919 kubeadm.go:985] duration metric: took 13.223993096s to wait for elevateKubeSystemPrivileges.
	I0814 09:44:21.659448  202919 kubeadm.go:392] StartCluster complete in 36.327126524s
	I0814 09:44:21.659469  202919 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:44:21.659577  202919 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:44:21.661126  202919 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:44:22.175870  202919 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210814094325-6746" rescaled to 1
	I0814 09:44:22.175919  202919 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:44:22.177602  202919 out.go:177] * Verifying Kubernetes components...
	I0814 09:44:22.177658  202919 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:44:22.175987  202919 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:44:22.176006  202919 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0814 09:44:22.176171  202919 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:44:22.177776  202919 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210814094325-6746"
	I0814 09:44:22.177794  202919 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210814094325-6746"
	W0814 09:44:22.177801  202919 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:44:22.177831  202919 host.go:66] Checking if "embed-certs-20210814094325-6746" exists ...
	I0814 09:44:22.177855  202919 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210814094325-6746"
	I0814 09:44:22.177877  202919 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210814094325-6746"
	I0814 09:44:22.178238  202919 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:44:22.178407  202919 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:44:22.237416  202919 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210814094325-6746"
	W0814 09:44:22.237445  202919 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:44:22.237473  202919 host.go:66] Checking if "embed-certs-20210814094325-6746" exists ...
	I0814 09:44:22.238002  202919 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:44:22.246722  202919 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:44:22.246860  202919 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:44:22.246875  202919 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:44:22.246933  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:44:22.294229  202919 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 09:44:22.294481  202919 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:44:22.294501  202919 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:44:22.294544  202919 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:44:22.295558  202919 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210814094325-6746" to be "Ready" ...
	I0814 09:44:22.304416  202919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:44:22.352080  202919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
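The docker container inspect calls above use a Go template to read the host side of the container's 22/tcp port mapping, which is why both ssh clients dial 127.0.0.1:32943 rather than the container's own IP. Stripped of the extra quoting, the lookup is:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20210814094325-6746   # prints 32943 in this run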
	I0814 09:44:22.449553  202919 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:44:22.525469  202919 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:44:22.826840  202919 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
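The sed pipeline at 09:44:22.29 above splices a hosts plugin block into the CoreDNS Corefile just before its forward . /etc/resolv.conf line, so in-cluster lookups of host.minikube.internal resolve to the docker network gateway instead of leaking out to the host resolver. Reconstructed from that command, the injected fragment is:
	        hosts {
	           192.168.58.1 host.minikube.internal
	           fallthrough
	        }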
	I0814 09:44:20.256175  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:22.302105  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:24.755468  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:23.208379  202919 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0814 09:44:23.208406  202919 addons.go:344] enableAddons completed in 1.032405341s
	I0814 09:44:24.303936  202919 node_ready.go:58] node "embed-certs-20210814094325-6746" has status "Ready":"False"
	I0814 09:44:27.256277  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:29.756113  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:26.306338  202919 node_ready.go:58] node "embed-certs-20210814094325-6746" has status "Ready":"False"
	I0814 09:44:28.849587  202919 node_ready.go:58] node "embed-certs-20210814094325-6746" has status "Ready":"False"
	I0814 09:44:32.255431  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:34.256572  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:31.304276  202919 node_ready.go:58] node "embed-certs-20210814094325-6746" has status "Ready":"False"
	I0814 09:44:33.304710  202919 node_ready.go:58] node "embed-certs-20210814094325-6746" has status "Ready":"False"
	I0814 09:44:33.804452  202919 node_ready.go:49] node "embed-certs-20210814094325-6746" has status "Ready":"True"
	I0814 09:44:33.804482  202919 node_ready.go:38] duration metric: took 11.508875618s waiting for node "embed-certs-20210814094325-6746" to be "Ready" ...
	I0814 09:44:33.804493  202919 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:44:33.815225  202919 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:36.755774  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:38.756450  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:35.823713  202919 pod_ready.go:102] pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-14 09:44:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0814 09:44:38.324472  202919 pod_ready.go:102] pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:39.825762  202919 pod_ready.go:92] pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace has status "Ready":"True"
	I0814 09:44:39.825788  202919 pod_ready.go:81] duration metric: took 6.010537164s waiting for pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.825800  202919 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.829817  202919 pod_ready.go:92] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:44:39.829837  202919 pod_ready.go:81] duration metric: took 4.025825ms waiting for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.829851  202919 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.833419  202919 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:44:39.833434  202919 pod_ready.go:81] duration metric: took 3.574984ms waiting for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.833446  202919 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.837032  202919 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:44:39.837050  202919 pod_ready.go:81] duration metric: took 3.595347ms waiting for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.837060  202919 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgvn2" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.840477  202919 pod_ready.go:92] pod "kube-proxy-mgvn2" in "kube-system" namespace has status "Ready":"True"
	I0814 09:44:39.840493  202919 pod_ready.go:81] duration metric: took 3.427713ms waiting for pod "kube-proxy-mgvn2" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:39.840503  202919 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:40.224291  202919 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:44:40.224316  202919 pod_ready.go:81] duration metric: took 383.80312ms waiting for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:44:40.224331  202919 pod_ready.go:38] duration metric: took 6.419824803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:44:40.224349  202919 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:44:40.224392  202919 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:44:40.244583  202919 api_server.go:70] duration metric: took 18.06863761s to wait for apiserver process to appear ...
	I0814 09:44:40.244611  202919 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:44:40.244622  202919 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:44:40.249112  202919 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:44:40.249981  202919 api_server.go:139] control plane version: v1.21.3
	I0814 09:44:40.250004  202919 api_server.go:129] duration metric: took 5.385334ms to wait for apiserver health ...
	I0814 09:44:40.250013  202919 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:44:40.426442  202919 system_pods.go:59] 8 kube-system pods found
	I0814 09:44:40.426470  202919 system_pods.go:61] "coredns-558bd4d5db-r9f9m" [a95b5bd5-9099-4c69-a77e-d319f3db017f] Running
	I0814 09:44:40.426477  202919 system_pods.go:61] "etcd-embed-certs-20210814094325-6746" [8a290f3e-9865-416a-a8b5-8185ce927699] Running
	I0814 09:44:40.426483  202919 system_pods.go:61] "kindnet-mmp5r" [77fdb837-eeb8-412b-a20e-ce5d6d198691] Running
	I0814 09:44:40.426489  202919 system_pods.go:61] "kube-apiserver-embed-certs-20210814094325-6746" [662d7fb3-b141-4d8e-a122-f58805e6b74a] Running
	I0814 09:44:40.426496  202919 system_pods.go:61] "kube-controller-manager-embed-certs-20210814094325-6746" [d5e10fb1-ef80-41f7-a5c6-8fb7ea20d7d4] Running
	I0814 09:44:40.426500  202919 system_pods.go:61] "kube-proxy-mgvn2" [2d2198aa-7650-47ff-81cc-7b3a13d11ac6] Running
	I0814 09:44:40.426504  202919 system_pods.go:61] "kube-scheduler-embed-certs-20210814094325-6746" [3a19f542-979a-4726-b47d-bebbfa29cfac] Running
	I0814 09:44:40.426508  202919 system_pods.go:61] "storage-provisioner" [38121728-bc1a-4972-b44b-f156a068aea0] Running
	I0814 09:44:40.426513  202919 system_pods.go:74] duration metric: took 176.495208ms to wait for pod list to return data ...
	I0814 09:44:40.426523  202919 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:44:40.624833  202919 default_sa.go:45] found service account: "default"
	I0814 09:44:40.624855  202919 default_sa.go:55] duration metric: took 198.326075ms for default service account to be created ...
	I0814 09:44:40.624863  202919 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:44:40.826574  202919 system_pods.go:86] 8 kube-system pods found
	I0814 09:44:40.826609  202919 system_pods.go:89] "coredns-558bd4d5db-r9f9m" [a95b5bd5-9099-4c69-a77e-d319f3db017f] Running
	I0814 09:44:40.826618  202919 system_pods.go:89] "etcd-embed-certs-20210814094325-6746" [8a290f3e-9865-416a-a8b5-8185ce927699] Running
	I0814 09:44:40.826624  202919 system_pods.go:89] "kindnet-mmp5r" [77fdb837-eeb8-412b-a20e-ce5d6d198691] Running
	I0814 09:44:40.826632  202919 system_pods.go:89] "kube-apiserver-embed-certs-20210814094325-6746" [662d7fb3-b141-4d8e-a122-f58805e6b74a] Running
	I0814 09:44:40.826640  202919 system_pods.go:89] "kube-controller-manager-embed-certs-20210814094325-6746" [d5e10fb1-ef80-41f7-a5c6-8fb7ea20d7d4] Running
	I0814 09:44:40.826646  202919 system_pods.go:89] "kube-proxy-mgvn2" [2d2198aa-7650-47ff-81cc-7b3a13d11ac6] Running
	I0814 09:44:40.826652  202919 system_pods.go:89] "kube-scheduler-embed-certs-20210814094325-6746" [3a19f542-979a-4726-b47d-bebbfa29cfac] Running
	I0814 09:44:40.826661  202919 system_pods.go:89] "storage-provisioner" [38121728-bc1a-4972-b44b-f156a068aea0] Running
	I0814 09:44:40.826669  202919 system_pods.go:126] duration metric: took 201.800881ms to wait for k8s-apps to be running ...
	I0814 09:44:40.826683  202919 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:44:40.826724  202919 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:44:40.836147  202919 system_svc.go:56] duration metric: took 9.457499ms WaitForService to wait for kubelet.
	I0814 09:44:40.836167  202919 kubeadm.go:547] duration metric: took 18.660226584s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:44:40.836191  202919 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:44:41.025460  202919 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:44:41.025508  202919 node_conditions.go:123] node cpu capacity is 8
	I0814 09:44:41.025525  202919 node_conditions.go:105] duration metric: took 189.329462ms to run NodePressure ...
	I0814 09:44:41.025538  202919 start.go:231] waiting for startup goroutines ...
	I0814 09:44:41.070764  202919 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:44:41.072765  202919 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210814094325-6746" cluster and "default" namespace by default
	I0814 09:44:41.255998  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:43.755813  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:45.755994  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:44:48.255976  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	6bbb5affac66d       56cc512116c8f       7 seconds ago       Running             busybox                   0                   ee0e6f883c5b9
	55e329998ae50       296a6d5035e2d       12 seconds ago      Running             coredns                   0                   118922bf86649
	e64c8214bad99       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   5977c8d993d4c
	3ce6636161093       6de166512aa22       28 seconds ago      Running             kindnet-cni               0                   c6293dce62c0c
	1df996662fcb6       adb2816ea823a       28 seconds ago      Running             kube-proxy                0                   1c07254f90623
	54d8fd8493e3a       bc2bb319a7038       56 seconds ago      Running             kube-controller-manager   0                   78f0e152a087a
	7239f0ed5afe4       6be0dc1302e30       56 seconds ago      Running             kube-scheduler            0                   646d961e5e8d4
	9ffce1306e10c       0369cf4303ffd       56 seconds ago      Running             etcd                      0                   6111c4debbfb2
	68fd3b9c805a0       3d174f00aa39e       56 seconds ago      Running             kube-apiserver            0                   2e100cdcac1f5
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:43:28 UTC, end at Sat 2021-08-14 09:44:51 UTC. --
	Aug 14 09:44:38 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:38.923373147Z" level=info msg="StartContainer for \"55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5\" returns successfully"
	Aug 14 09:44:41 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:41.584893929Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:busybox,Uid:f7b65b9d-1923-4b4e-b278-8ef5cecdd2d7,Namespace:default,Attempt:0,}"
	Aug 14 09:44:41 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:41.667241442Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee0e6f883c5b9eeb8b0d66fb71d2ec9abf57f55389d06bb6215981bbf16fc291 pid=2353
	Aug 14 09:44:41 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:41.820005901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f7b65b9d-1923-4b4e-b278-8ef5cecdd2d7,Namespace:default,Attempt:0,} returns sandbox id \"ee0e6f883c5b9eeb8b0d66fb71d2ec9abf57f55389d06bb6215981bbf16fc291\""
	Aug 14 09:44:41 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:41.821602444Z" level=info msg="PullImage \"busybox:1.28.4-glibc\""
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.010290912Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox:1.28.4-glibc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.012364493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.014166898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/busybox:1.28.4-glibc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.015722352Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.016134926Z" level=info msg="PullImage \"busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.017599091Z" level=info msg="CreateContainer within sandbox \"ee0e6f883c5b9eeb8b0d66fb71d2ec9abf57f55389d06bb6215981bbf16fc291\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.085126384Z" level=info msg="CreateContainer within sandbox \"ee0e6f883c5b9eeb8b0d66fb71d2ec9abf57f55389d06bb6215981bbf16fc291\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"6bbb5affac66d29d5d1da67b3312ede2dbb1161914e3d4b222dc693d36d6b662\""
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.085532072Z" level=info msg="StartContainer for \"6bbb5affac66d29d5d1da67b3312ede2dbb1161914e3d4b222dc693d36d6b662\""
	Aug 14 09:44:43 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:43.219932537Z" level=info msg="StartContainer for \"6bbb5affac66d29d5d1da67b3312ede2dbb1161914e3d4b222dc693d36d6b662\" returns successfully"
	Aug 14 09:44:49 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:49.386853856Z" level=info msg="Exec for \"6bbb5affac66d29d5d1da67b3312ede2dbb1161914e3d4b222dc693d36d6b662\" with command [/bin/sh -c ulimit -n], tty false and stdin false"
	Aug 14 09:44:49 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:49.386926905Z" level=info msg="Exec for \"6bbb5affac66d29d5d1da67b3312ede2dbb1161914e3d4b222dc693d36d6b662\" returns URL \"http://192.168.58.2:10010/exec/VRYjXRDd\""
	Aug 14 09:44:49 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:49.441331336Z" level=info msg="Exec process \"5a9cbed443876e00108bbcd5cff3a7c21379043a48461c26c429bc87a1e01d0c\" exits with exit code 0 and error <nil>"
	Aug 14 09:44:49 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:49.441511718Z" level=info msg="Finish piping \"stdout\" of container exec \"5a9cbed443876e00108bbcd5cff3a7c21379043a48461c26c429bc87a1e01d0c\""
	Aug 14 09:44:49 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:49.441636725Z" level=info msg="Finish piping \"stderr\" of container exec \"5a9cbed443876e00108bbcd5cff3a7c21379043a48461c26c429bc87a1e01d0c\""
	Aug 14 09:44:50 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:50.611168855Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:metrics-server-7c784ccb57-57jxt,Uid:69805726-3356-4374-ba02-ddee9ab9f4d8,Namespace:kube-system,Attempt:0,}"
	Aug 14 09:44:50 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:50.710163391Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/640508d19e04a809f787245b0df2c00248e3a715ab874d2c1983b37991ab08f7 pid=2578
	Aug 14 09:44:50 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:50.879263238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-7c784ccb57-57jxt,Uid:69805726-3356-4374-ba02-ddee9ab9f4d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"640508d19e04a809f787245b0df2c00248e3a715ab874d2c1983b37991ab08f7\""
	Aug 14 09:44:50 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:50.880659982Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:44:50 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:50.932945800Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Aug 14 09:44:50 embed-certs-20210814094325-6746 containerd[458]: time="2021-08-14T09:44:50.934152160Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	
	* 
	* ==> coredns [55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210814094325-6746
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210814094325-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=embed-certs-20210814094325-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_44_08_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:43:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210814094325-6746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Aug 2021 09:44:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:44:43 +0000   Sat, 14 Aug 2021 09:43:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:44:43 +0000   Sat, 14 Aug 2021 09:43:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:44:43 +0000   Sat, 14 Aug 2021 09:43:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:44:43 +0000   Sat, 14 Aug 2021 09:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20210814094325-6746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                8b12fde3-d85f-4477-8bb2-011e8d6b01bd
	  Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-558bd4d5db-r9f9m                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-embed-certs-20210814094325-6746                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         44s
	  kube-system                 kindnet-mmp5r                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-embed-certs-20210814094325-6746              250m (3%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-embed-certs-20210814094325-6746    200m (2%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-mgvn2                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-embed-certs-20210814094325-6746              100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 metrics-server-7c784ccb57-57jxt                             100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         2s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)  kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 28s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                18s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000001] ll header: 00000000: 02 42 85 2c 0f c0 02 42 c0 a8 3a 02 08 00        .B.,...B..:...
	[  +0.020470] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth1aaa4059
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 6c 08 49 4a d1 08 06        ......2l.IJ...
	[  +0.000256] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev veth7067e1ac
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e 8d d5 44 9e 30 08 06        .........D.0..
	[ +11.959520] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfa4c84cf
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 06 55 0e 07 67 26 08 06        .......U..g&..
	[  +3.495552] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethc578d1e4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 06 df f9 78 a3 27 08 06        .........x.'..
	[  +8.611512] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth2cdba4ed
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 1c e7 62 8b 4d 08 06        .........b.M..
	[Aug14 09:43] cgroup: cgroup2: unknown option "nsdelegate"
	[ +15.852848] cgroup: cgroup2: unknown option "nsdelegate"
	[ +15.630654] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth1f2da06b
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 6e 5e 3c e6 4a 77 08 06        ......n^<.Jw..
	[  +0.763956] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethd6d633a7
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 4d cd 7c 56 c8 08 06        ......zM.|V...
	[  +0.000604] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth2ae90e92
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 32 ea 4c bc 4a cd 08 06        ......2.L.J...
	[Aug14 09:44] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth2fa54ff1
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 03 96 b3 24 77 08 06        ..........$w..
	[  +3.132049] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vetha932ae8c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 22 25 62 b8 5c 08 06        ......2"%b.\..
	[  +9.035219] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethb11afe2d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff a2 77 e5 b8 ec 8e 08 06        .......w......
	
	* 
	* ==> etcd [9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38] <==
	* 2021-08-14 09:43:54.902291 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-14 09:43:54.903215 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-14 09:43:54.903269 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-14 09:43:54.903335 I | etcdserver: published {Name:embed-certs-20210814094325-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:43:54.903361 I | embed: ready to serve client requests
	2021-08-14 09:43:54.904771 I | embed: ready to serve client requests
	2021-08-14 09:43:54.905863 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:43:54.906715 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-14 09:44:02.584596 W | wal: sync duration of 1.977561586s, expected less than 1s
	2021-08-14 09:44:03.228710 W | etcdserver: read-only range request "key:\"/registry/minions/embed-certs-20210814094325-6746\" " with result "range_response_count:1 size:3775" took too long (2.363978902s) to execute
	2021-08-14 09:44:03.228818 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (2.52347968s) to execute
	2021-08-14 09:44:03.228863 W | etcdserver: request "header:<ID:3238505139243470681 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-p2jfn\" mod_revision:0 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-p2jfn\" value_size:868 >> failure:<>>" with result "size:16" took too long (129.85559ms) to execute
	2021-08-14 09:44:03.228928 W | etcdserver: read-only range request "key:\"/registry/events/default/embed-certs-20210814094325-6746.169b230e1701e12b\" " with result "range_response_count:1 size:763" took too long (2.422605253s) to execute
	2021-08-14 09:44:05.705372 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999909055s) to execute
	WARNING: 2021/08/14 09:44:05 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-14 09:44:07.129330 W | wal: sync duration of 3.428304666s, expected less than 1s
	2021-08-14 09:44:07.130234 W | etcdserver: request "header:<ID:3238505139243470682 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-20210814094325-6746.169b230e1701e12b\" mod_revision:221 > success:<request_put:<key:\"/registry/events/default/embed-certs-20210814094325-6746.169b230e1701e12b\" value_size:657 lease:3238505139243470667 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-20210814094325-6746.169b230e1701e12b\" > >>" with result "size:16" took too long (3.428973526s) to execute
	2021-08-14 09:44:07.130411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:44:07.131597 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/csr-p2jfn\" " with result "range_response_count:1 size:937" took too long (3.899736465s) to execute
	2021-08-14 09:44:07.132106 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20210814094325-6746\" " with result "range_response_count:1 size:6155" took too long (3.430460253s) to execute
	2021-08-14 09:44:07.132957 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (427.293532ms) to execute
	2021-08-14 09:44:15.527821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:44:21.373581 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:44:31.374708 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:44:41.373691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  09:44:51 up  1:27,  0 users,  load average: 2.50, 2.65, 2.07
	Linux embed-certs-20210814094325-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3] <==
	* Trace[368172954]: ---"Listing from storage done" 3903ms (09:44:00.134)
	Trace[368172954]: [3.903276163s] [3.903276163s] END
	I0814 09:44:07.135122       1 trace.go:205] Trace[46787305]: "GuaranteedUpdate etcd3" type:*core.Event (14-Aug-2021 09:44:00.805) (total time: 6329ms):
	Trace[46787305]: ---"initial value restored" 2423ms (09:44:00.229)
	Trace[46787305]: ---"Transaction committed" 3904ms (09:44:00.135)
	Trace[46787305]: [6.329444098s] [6.329444098s] END
	I0814 09:44:07.135397       1 trace.go:205] Trace[1874618698]: "Patch" url:/api/v1/namespaces/default/events/embed-certs-20210814094325-6746.169b230e1701e12b,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:44:00.805) (total time: 6329ms):
	Trace[1874618698]: ---"About to apply patch" 2423ms (09:44:00.229)
	Trace[1874618698]: ---"Object stored in database" 3904ms (09:44:00.135)
	Trace[1874618698]: [6.329857298s] [6.329857298s] END
	I0814 09:44:07.140813       1 trace.go:205] Trace[1507555777]: "GuaranteedUpdate etcd3" type:*core.Node (14-Aug-2021 09:44:03.233) (total time: 3906ms):
	Trace[1507555777]: ---"Transaction committed" 3899ms (09:44:00.134)
	Trace[1507555777]: [3.906985907s] [3.906985907s] END
	I0814 09:44:07.141274       1 trace.go:205] Trace[1143803609]: "Patch" url:/api/v1/nodes/embed-certs-20210814094325-6746,user-agent:kubeadm/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.58.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:44:03.233) (total time: 3907ms):
	Trace[1143803609]: ---"Object stored in database" 3906ms (09:44:00.140)
	Trace[1143803609]: [3.907548744s] [3.907548744s] END
	I0814 09:44:07.700213       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 09:44:07.910712       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0814 09:44:07.936093       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 09:44:13.261559       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 09:44:21.906718       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0814 09:44:21.957357       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0814 09:44:34.801918       1 client.go:360] parsed scheme: "passthrough"
	I0814 09:44:34.801986       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0814 09:44:34.801997       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae] <==
	* I0814 09:44:21.262368       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-embed-certs-20210814094325-6746" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0814 09:44:21.262510       1 event.go:291] "Event occurred" object="kube-system/etcd-embed-certs-20210814094325-6746" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0814 09:44:21.301423       1 shared_informer.go:247] Caches are synced for deployment 
	I0814 09:44:21.305889       1 shared_informer.go:247] Caches are synced for stateful set 
	I0814 09:44:21.340189       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:44:21.354718       1 shared_informer.go:247] Caches are synced for disruption 
	I0814 09:44:21.354736       1 disruption.go:371] Sending events to api server.
	I0814 09:44:21.355800       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:44:21.774297       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:44:21.803924       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:44:21.803944       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:44:21.911530       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mgvn2"
	I0814 09:44:21.913832       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mmp5r"
	E0814 09:44:21.928645       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"d93052c1-e9f6-4333-80dc-79f22596a0e7", ResourceVersion:"283", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764531047, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0003576e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000357710)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc000756dc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001066100), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000357
740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000357770), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000756e00)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00037a960), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015034c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000022690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001113b20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001503548)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0814 09:44:21.932226       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"721bad17-434c-47a8-8db6-288400588632", ResourceVersion:"290", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764531048, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0003577a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0003577d0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000756e80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"k
indnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000357800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.F
CVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000357830), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolum
eSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000357860), EmptyDir:(*v1.Em
ptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxV
olume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000756ea0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000756f00)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infD
ecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), L
ifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00037aa80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001503748), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000022700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Ho
stAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001113b70)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001503790)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0814 09:44:21.958962       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0814 09:44:21.972154       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0814 09:44:22.058875       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-6mpv7"
	I0814 09:44:22.061486       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-r9f9m"
	I0814 09:44:22.080153       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-6mpv7"
	I0814 09:44:36.256673       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0814 09:44:49.923552       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0814 09:44:49.935156       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:44:49.943329       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:44:50.003321       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-57jxt"
	
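Editor's note: the controller-manager dump above ends in the classic optimistic-concurrency error: the DaemonSet's resourceVersion went stale between read and write ("the object has been modified; please apply your changes to the latest version and try again"). Controllers recover by re-reading and retrying, which client-go packages as RetryOnConflict; a minimal sketch of that pattern (cluster access via the default kubeconfig, the label mutation is hypothetical, not minikube's code):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Assumes a reachable cluster via ~/.kube/config; illustrative only.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Re-fetch the latest object on every attempt so the update carries a
	// fresh resourceVersion; RetryOnConflict retries only on 409 Conflict.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := client.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Labels == nil {
			ds.Labels = map[string]string{}
		}
		ds.Labels["touched"] = "true" // hypothetical example mutation
		_, err = client.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}

Because the conflict is retried on the next sync, a single occurrence in a controller-manager log like this one is normally benign.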
	* 
	* ==> kube-proxy [1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138] <==
	* I0814 09:44:23.129961       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:44:23.130025       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:44:23.130063       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:44:23.204402       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:44:23.204438       1 server_others.go:212] Using iptables Proxier.
	I0814 09:44:23.204453       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:44:23.204466       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:44:23.205000       1 server.go:643] Version: v1.21.3
	I0814 09:44:23.205892       1 config.go:315] Starting service config controller
	I0814 09:44:23.205917       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:44:23.205961       1 config.go:224] Starting endpoint slice config controller
	I0814 09:44:23.205965       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:44:23.209123       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:44:23.210460       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:44:23.306683       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:44:23.306709       1 shared_informer.go:247] Caches are synced for service config 
	
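Editor's note: the "Waiting for caches to sync" / "Caches are synced" pairs above are client-go's shared-informer startup handshake: kube-proxy's service and endpoint-slice config controllers refuse to act until their initial list completes. A minimal sketch of that pattern, using a Services informer as a stand-in (the resource choice is an assumption, not kube-proxy's exact wiring):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	stop := make(chan struct{})
	defer close(stop)

	// One factory can back many informers; kube-proxy does the same for
	// its Service and EndpointSlice watches.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	factory.Start(stop) // begins list+watch in the background

	// The equivalent of "Waiting for caches to sync ..." in the log above.
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches are synced") // cf. "Caches are synced for service config"
}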
	* 
	* ==> kube-scheduler [7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147] <==
	* I0814 09:43:58.815047       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:43:58.816713       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:43:58.818415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:43:58.818500       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:43:58.818567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:43:58.818643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:43:58.818684       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:43:58.818726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:43:58.818770       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:43:58.818827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:43:58.818880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:43:58.818937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:43:58.819004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:43:58.822376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:43:58.822395       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:43:59.755726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:43:59.759590       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:43:59.826279       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:43:59.831396       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:43:59.869712       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:43:59.901724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:43:59.901844       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:43:59.902559       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:43:59.932411       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0814 09:44:01.614570       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
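Editor's note: every list above fails with "forbidden" because the scheduler's informers start before its RBAC grants are visible; the errors stop once the client-ca configmap cache syncs (last line). A hedged sketch of probing such permissions out-of-band with a SelfSubjectAccessReview (illustrative only; not something the scheduler itself does):

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Ask the API server: may the *current* identity list nodes cluster-wide?
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "nodes",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}

From a shell, `kubectl auth can-i list nodes --as=system:kube-scheduler` answers the same question.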
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:43:28 UTC, end at Sat 2021-08-14 09:44:51 UTC. --
	Aug 14 09:44:21 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:21.917599    1270 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004181    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d2198aa-7650-47ff-81cc-7b3a13d11ac6-lib-modules\") pod \"kube-proxy-mgvn2\" (UID: \"2d2198aa-7650-47ff-81cc-7b3a13d11ac6\") "
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004282    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg2nt\" (UniqueName: \"kubernetes.io/projected/2d2198aa-7650-47ff-81cc-7b3a13d11ac6-kube-api-access-mg2nt\") pod \"kube-proxy-mgvn2\" (UID: \"2d2198aa-7650-47ff-81cc-7b3a13d11ac6\") "
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004374    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/77fdb837-eeb8-412b-a20e-ce5d6d198691-cni-cfg\") pod \"kindnet-mmp5r\" (UID: \"77fdb837-eeb8-412b-a20e-ce5d6d198691\") "
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004417    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77fdb837-eeb8-412b-a20e-ce5d6d198691-xtables-lock\") pod \"kindnet-mmp5r\" (UID: \"77fdb837-eeb8-412b-a20e-ce5d6d198691\") "
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004443    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77fdb837-eeb8-412b-a20e-ce5d6d198691-lib-modules\") pod \"kindnet-mmp5r\" (UID: \"77fdb837-eeb8-412b-a20e-ce5d6d198691\") "
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004499    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78nm2\" (UniqueName: \"kubernetes.io/projected/77fdb837-eeb8-412b-a20e-ce5d6d198691-kube-api-access-78nm2\") pod \"kindnet-mmp5r\" (UID: \"77fdb837-eeb8-412b-a20e-ce5d6d198691\") "
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004533    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d2198aa-7650-47ff-81cc-7b3a13d11ac6-xtables-lock\") pod \"kube-proxy-mgvn2\" (UID: \"2d2198aa-7650-47ff-81cc-7b3a13d11ac6\") "
	Aug 14 09:44:22 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:22.004558    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d2198aa-7650-47ff-81cc-7b3a13d11ac6-kube-proxy\") pod \"kube-proxy-mgvn2\" (UID: \"2d2198aa-7650-47ff-81cc-7b3a13d11ac6\") "
	Aug 14 09:44:23 embed-certs-20210814094325-6746 kubelet[1270]: E0814 09:44:23.405682    1270 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 14 09:44:38 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:38.148481    1270 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:44:38 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:38.150549    1270 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:44:38 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:38.194239    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpt5\" (UniqueName: \"kubernetes.io/projected/a95b5bd5-9099-4c69-a77e-d319f3db017f-kube-api-access-wlpt5\") pod \"coredns-558bd4d5db-r9f9m\" (UID: \"a95b5bd5-9099-4c69-a77e-d319f3db017f\") "
	Aug 14 09:44:38 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:38.194281    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/38121728-bc1a-4972-b44b-f156a068aea0-tmp\") pod \"storage-provisioner\" (UID: \"38121728-bc1a-4972-b44b-f156a068aea0\") "
	Aug 14 09:44:38 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:38.194331    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kn7f\" (UniqueName: \"kubernetes.io/projected/38121728-bc1a-4972-b44b-f156a068aea0-kube-api-access-9kn7f\") pod \"storage-provisioner\" (UID: \"38121728-bc1a-4972-b44b-f156a068aea0\") "
	Aug 14 09:44:38 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:38.194419    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95b5bd5-9099-4c69-a77e-d319f3db017f-config-volume\") pod \"coredns-558bd4d5db-r9f9m\" (UID: \"a95b5bd5-9099-4c69-a77e-d319f3db017f\") "
	Aug 14 09:44:41 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:41.282403    1270 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:44:41 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:41.310296    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjnk\" (UniqueName: \"kubernetes.io/projected/f7b65b9d-1923-4b4e-b278-8ef5cecdd2d7-kube-api-access-4hjnk\") pod \"busybox\" (UID: \"f7b65b9d-1923-4b4e-b278-8ef5cecdd2d7\") "
	Aug 14 09:44:50 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:50.008085    1270 topology_manager.go:187] "Topology Admit Handler"
	Aug 14 09:44:50 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:50.202712    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69805726-3356-4374-ba02-ddee9ab9f4d8-tmp-dir\") pod \"metrics-server-7c784ccb57-57jxt\" (UID: \"69805726-3356-4374-ba02-ddee9ab9f4d8\") "
	Aug 14 09:44:50 embed-certs-20210814094325-6746 kubelet[1270]: I0814 09:44:50.202775    1270 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9fv\" (UniqueName: \"kubernetes.io/projected/69805726-3356-4374-ba02-ddee9ab9f4d8-kube-api-access-lf9fv\") pod \"metrics-server-7c784ccb57-57jxt\" (UID: \"69805726-3356-4374-ba02-ddee9ab9f4d8\") "
	Aug 14 09:44:50 embed-certs-20210814094325-6746 kubelet[1270]: E0814 09:44:50.934483    1270 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:44:50 embed-certs-20210814094325-6746 kubelet[1270]: E0814 09:44:50.934538    1270 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:44:50 embed-certs-20210814094325-6746 kubelet[1270]: E0814 09:44:50.934694    1270 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lf9fv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler
{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-57jxt_kube-system(69805726-3356-4374-ba02-ddee9ab9f4d8): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:44:50 embed-certs-20210814094325-6746 kubelet[1270]: E0814 09:44:50.934756    1270 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-57jxt" podUID=69805726-3356-4374-ba02-ddee9ab9f4d8
	
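Editor's note: the ErrImagePull chain above is the test working as intended: the metrics-server manifest deliberately points at fake.domain, and the pull dies at DNS resolution ("lookup fake.domain ... no such host") before any registry request is attempted. The failing step in isolation, as a tiny sketch:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The registry host from the failing image reference; resolution fails
	// before any HTTPS HEAD request, which is exactly what containerd reports.
	addrs, err := net.LookupHost("fake.domain")
	fmt.Println(addrs, err) // err is roughly "lookup fake.domain: no such host"
}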
	* 
	* ==> storage-provisioner [e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495] <==
	* I0814 09:44:38.845861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:44:38.900202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:44:38.900268       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:44:38.905650       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:44:38.905818       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_a19f16ed-74ce-4475-88d4-5f694b233397!
	I0814 09:44:38.905825       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f392d183-97cd-470b-9d20-80a8a0fd4399", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210814094325-6746_a19f16ed-74ce-4475-88d4-5f694b233397 became leader
	I0814 09:44:39.006108       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_a19f16ed-74ce-4475-88d4-5f694b233397!
	
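Editor's note: the storage-provisioner lines above are the standard client-go leader-election dance over the kube-system/k8s.io-minikube-hostpath lock. A compressed sketch of that pattern; note the log shows the older Endpoints-based lock (Kind:"Endpoints"), while this sketch uses the current Leases lock, and the identity and timings are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	id, _ := os.Hostname()
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		panic(err)
	}

	// Blocks; the callbacks mirror "attempting to acquire leader lease" /
	// "successfully acquired lease" in the log above.
	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader") },
			OnStoppedLeading: func() { fmt.Println("lost leadership") },
		},
	})
}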

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-57jxt
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 describe pod metrics-server-7c784ccb57-57jxt
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210814094325-6746 describe pod metrics-server-7c784ccb57-57jxt: exit status 1 (61.539387ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-57jxt" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210814094325-6746 describe pod metrics-server-7c784ccb57-57jxt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (109.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210814094108-6746 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20210814094108-6746 --alsologtostderr -v=1: exit status 80 (1.786768465s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-20210814094108-6746 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 09:48:45.623775  234750 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:48:45.623857  234750 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:48:45.623868  234750 out.go:311] Setting ErrFile to fd 2...
	I0814 09:48:45.623871  234750 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:48:45.623985  234750 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:48:45.624174  234750 out.go:305] Setting JSON to false
	I0814 09:48:45.624191  234750 mustload.go:65] Loading cluster: no-preload-20210814094108-6746
	I0814 09:48:45.624558  234750 config.go:177] Loaded profile config "no-preload-20210814094108-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:48:45.624996  234750 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:45.664129  234750 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:45.664976  234750 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210814094108-6746 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0814 09:48:45.667562  234750 out.go:177] * Pausing node no-preload-20210814094108-6746 ... 
	I0814 09:48:45.667585  234750 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:45.667854  234750 ssh_runner.go:149] Run: systemctl --version
	I0814 09:48:45.667892  234750 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:45.705259  234750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:48:45.800389  234750 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:48:45.808665  234750 pause.go:50] kubelet running: true
	I0814 09:48:45.808719  234750 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
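Editor's note: before touching the runtime, pause resolves the node's SSH endpoint by asking Docker which host port backs the container's 22/tcp (the cli_runner and sshutil lines above; port 32938 in this run). A standalone sketch of the same lookup, assuming a local Docker daemon and this profile's container name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template minikube passes to `docker container inspect -f` above:
	// the first host port mapped to the container's 22/tcp.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"no-preload-20210814094108-6746").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}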
	I0814 09:48:45.914790  234750 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:48:45.914916  234750 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:48:45.984517  234750 cri.go:76] found id: "bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392"
	I0814 09:48:45.984538  234750 cri.go:76] found id: "86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992"
	I0814 09:48:45.984543  234750 cri.go:76] found id: "c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27"
	I0814 09:48:45.984563  234750 cri.go:76] found id: "d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f"
	I0814 09:48:45.984568  234750 cri.go:76] found id: "4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47"
	I0814 09:48:45.984575  234750 cri.go:76] found id: "8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17"
	I0814 09:48:45.984583  234750 cri.go:76] found id: "b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058"
	I0814 09:48:45.984589  234750 cri.go:76] found id: "4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769"
	I0814 09:48:45.984596  234750 cri.go:76] found id: "a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea"
	I0814 09:48:45.984603  234750 cri.go:76] found id: "ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e"
	I0814 09:48:45.984609  234750 cri.go:76] found id: ""
	I0814 09:48:45.984643  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:48:46.027025  234750 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","pid":4323,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920/rootfs","created":"2021-08-14T09:48:26.848992371Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-29ft7_c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d","pid":4850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27de7ceed67ed17d1b654ee4e6feebc
b913a0dbcaa091e9d930784451ce8b45d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d/rootfs","created":"2021-08-14T09:48:27.95699776Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-g5rms_b10653e3-cf89-4ad2-bfe0-02bd5e3ab136"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","pid":4383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4/rootfs","created":"2021-08-14T09:48:27.045038428Z","annotations":{"io.kubernetes.cri.container-typ
e":"sandbox","io.kubernetes.cri.sandbox-id":"2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_58508b3f-6c10-488b-b616-44a1cb8dfed8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","pid":3341,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6/rootfs","created":"2021-08-14T09:48:05.065001031Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210814094108-6746_b08bdb5366209a33dbc84d2eedb47b6d"},"owner":"root"},{"ociVersion":"1.0.
2-dev","id":"4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47","pid":3462,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47/rootfs","created":"2021-08-14T09:48:05.380958723Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769","pid":3450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769/roo
tfs","created":"2021-08-14T09:48:05.380928273Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","pid":4154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23/rootfs","created":"2021-08-14T09:48:26.200907819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wjwsl_101b3998-93d5-4c75-b83c-09c983f2f62a"},"owner":"root"},{"ociVersion
":"1.0.2-dev","id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","pid":3327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee/rootfs","created":"2021-08-14T09:48:05.064951226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210814094108-6746_c660633e79acbc2b90f8e45cbf17f7f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992","pid":4571,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992","rootfs":"/run/cont
ainerd/io.containerd.runtime.v2.task/k8s.io/86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992/rootfs","created":"2021-08-14T09:48:27.502114533Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17","pid":3471,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17/rootfs","created":"2021-08-14T09:48:05.381053556Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9
a0c3d70460fba"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","pid":4150,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97/rootfs","created":"2021-08-14T09:48:26.200892462Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-vtqtr_61de2c32-adcf-43c9-9f57-84213c6a9ff2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058","pid":3449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058","ro
otfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058/rootfs","created":"2021-08-14T09:48:05.381002408Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392","pid":4680,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392/rootfs","created":"2021-08-14T09:48:27.721008434Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2a79ace051eebbfe70618073983cefbc
dcb97bdac46296dc695a78a3da734ed4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","pid":4794,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7/rootfs","created":"2021-08-14T09:48:27.873061881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-vrv5k_805798a2-948f-4d0c-a548-07118d846033"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27","pid":4418,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c50fd0e548ee
b8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27/rootfs","created":"2021-08-14T09:48:27.129248849Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e","pid":5052,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e/rootfs","created":"2021-08-14T09:48:32.2050671Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kuberne
tes.cri.sandbox-id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","pid":4582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d/rootfs","created":"2021-08-14T09:48:27.417027814Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-rjgmp_ca6ddeeb-6afd-4408-8ac8-39df00ec7dea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","pid":3334,"status":"running","bundle":"/run/containerd/io.containerd.ru
ntime.v2.task/k8s.io/d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267/rootfs","created":"2021-08-14T09:48:05.065026281Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210814094108-6746_52ba5343cb2129d22c21dacb7ec19019"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f","pid":4354,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f/rootfs","created":"2021-08-14T09:48:26.900917654Z
","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","pid":3320,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba/rootfs","created":"2021-08-14T09:48:05.065017871Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210814094108-6746_4b4a2c856eba1cd56b9bea8eefe382ea"},"owner":"root"}]
	I0814 09:48:46.027245  234750 cri.go:113] list returned 20 containers
	I0814 09:48:46.027256  234750 cri.go:116] container: {ID:1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920 Status:running}
	I0814 09:48:46.027293  234750 cri.go:118] skipping 1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920 - not in ps
	I0814 09:48:46.027298  234750 cri.go:116] container: {ID:27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d Status:running}
	I0814 09:48:46.027305  234750 cri.go:118] skipping 27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d - not in ps
	I0814 09:48:46.027308  234750 cri.go:116] container: {ID:2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4 Status:running}
	I0814 09:48:46.027316  234750 cri.go:118] skipping 2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4 - not in ps
	I0814 09:48:46.027319  234750 cri.go:116] container: {ID:2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6 Status:running}
	I0814 09:48:46.027327  234750 cri.go:118] skipping 2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6 - not in ps
	I0814 09:48:46.027330  234750 cri.go:116] container: {ID:4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47 Status:running}
	I0814 09:48:46.027338  234750 cri.go:116] container: {ID:4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769 Status:running}
	I0814 09:48:46.027342  234750 cri.go:116] container: {ID:4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23 Status:running}
	I0814 09:48:46.027349  234750 cri.go:118] skipping 4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23 - not in ps
	I0814 09:48:46.027353  234750 cri.go:116] container: {ID:85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee Status:running}
	I0814 09:48:46.027360  234750 cri.go:118] skipping 85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee - not in ps
	I0814 09:48:46.027364  234750 cri.go:116] container: {ID:86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992 Status:running}
	I0814 09:48:46.027368  234750 cri.go:116] container: {ID:8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17 Status:running}
	I0814 09:48:46.027374  234750 cri.go:116] container: {ID:974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97 Status:running}
	I0814 09:48:46.027384  234750 cri.go:118] skipping 974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97 - not in ps
	I0814 09:48:46.027390  234750 cri.go:116] container: {ID:b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058 Status:running}
	I0814 09:48:46.027394  234750 cri.go:116] container: {ID:bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392 Status:running}
	I0814 09:48:46.027400  234750 cri.go:116] container: {ID:c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7 Status:running}
	I0814 09:48:46.027404  234750 cri.go:118] skipping c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7 - not in ps
	I0814 09:48:46.027410  234750 cri.go:116] container: {ID:c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27 Status:running}
	I0814 09:48:46.027414  234750 cri.go:116] container: {ID:ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e Status:running}
	I0814 09:48:46.027424  234750 cri.go:116] container: {ID:cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d Status:running}
	I0814 09:48:46.027432  234750 cri.go:118] skipping cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d - not in ps
	I0814 09:48:46.027436  234750 cri.go:116] container: {ID:d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267 Status:running}
	I0814 09:48:46.027440  234750 cri.go:118] skipping d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267 - not in ps
	I0814 09:48:46.027447  234750 cri.go:116] container: {ID:d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f Status:running}
	I0814 09:48:46.027453  234750 cri.go:116] container: {ID:f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba Status:running}
	I0814 09:48:46.027460  234750 cri.go:118] skipping f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba - not in ps
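Editor's note: the container:/skipping "- not in ps" lines above are a set filter: `runc list` reports every task including pod sandboxes, while the earlier `crictl ps` pass returned only workload-container IDs, so any ID missing from the ps set is treated as a sandbox and skipped. A distilled sketch of that filter, with hypothetical short IDs standing in for the 64-character ones in the log:

package main

import "fmt"

func main() {
	// IDs reported by `crictl ps` (workload containers only).
	psIDs := []string{"4830e1", "4a0ace", "86e160"}
	inPs := make(map[string]bool, len(psIDs))
	for _, id := range psIDs {
		inPs[id] = true
	}

	// IDs reported by `runc list` (containers *and* pod sandboxes).
	runcIDs := []string{"1401b1", "4830e1", "4a0ace", "85cff6", "86e160"}

	var pauseTargets []string
	for _, id := range runcIDs {
		if !inPs[id] {
			fmt.Println("skipping", id, "- not in ps") // sandbox, leave running
			continue
		}
		pauseTargets = append(pauseTargets, id)
	}
	fmt.Println("will pause:", pauseTargets)
}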
	I0814 09:48:46.027492  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47
	I0814 09:48:46.041739  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47 4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769
	I0814 09:48:46.053545  234750 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47 4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:48:46Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:48:46.329960  234750 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:48:46.339518  234750 pause.go:50] kubelet running: false
	I0814 09:48:46.339575  234750 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:48:46.435597  234750 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:48:46.435717  234750 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:48:46.503151  234750 cri.go:76] found id: "bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392"
	I0814 09:48:46.503184  234750 cri.go:76] found id: "86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992"
	I0814 09:48:46.503191  234750 cri.go:76] found id: "c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27"
	I0814 09:48:46.503200  234750 cri.go:76] found id: "d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f"
	I0814 09:48:46.503203  234750 cri.go:76] found id: "4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47"
	I0814 09:48:46.503209  234750 cri.go:76] found id: "8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17"
	I0814 09:48:46.503214  234750 cri.go:76] found id: "b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058"
	I0814 09:48:46.503219  234750 cri.go:76] found id: "4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769"
	I0814 09:48:46.503225  234750 cri.go:76] found id: "a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea"
	I0814 09:48:46.503243  234750 cri.go:76] found id: "ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e"
	I0814 09:48:46.503254  234750 cri.go:76] found id: ""
	I0814 09:48:46.503290  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:48:46.544836  234750 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","pid":4323,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920/rootfs","created":"2021-08-14T09:48:26.848992371Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-29ft7_c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d","pid":4850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27de7ceed67ed17d1b654ee4e6feebc
b913a0dbcaa091e9d930784451ce8b45d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d/rootfs","created":"2021-08-14T09:48:27.95699776Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-g5rms_b10653e3-cf89-4ad2-bfe0-02bd5e3ab136"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","pid":4383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4/rootfs","created":"2021-08-14T09:48:27.045038428Z","annotations":{"io.kubernetes.cri.container-typ
e":"sandbox","io.kubernetes.cri.sandbox-id":"2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_58508b3f-6c10-488b-b616-44a1cb8dfed8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","pid":3341,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6/rootfs","created":"2021-08-14T09:48:05.065001031Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210814094108-6746_b08bdb5366209a33dbc84d2eedb47b6d"},"owner":"root"},{"ociVersion":"1.0.
2-dev","id":"4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47","pid":3462,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47/rootfs","created":"2021-08-14T09:48:05.380958723Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769","pid":3450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769/root
fs","created":"2021-08-14T09:48:05.380928273Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","pid":4154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23/rootfs","created":"2021-08-14T09:48:26.200907819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wjwsl_101b3998-93d5-4c75-b83c-09c983f2f62a"},"owner":"root"},{"ociVersion"
:"1.0.2-dev","id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","pid":3327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee/rootfs","created":"2021-08-14T09:48:05.064951226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210814094108-6746_c660633e79acbc2b90f8e45cbf17f7f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992","pid":4571,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992","rootfs":"/run/conta
inerd/io.containerd.runtime.v2.task/k8s.io/86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992/rootfs","created":"2021-08-14T09:48:27.502114533Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17","pid":3471,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17/rootfs","created":"2021-08-14T09:48:05.381053556Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a
0c3d70460fba"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","pid":4150,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97/rootfs","created":"2021-08-14T09:48:26.200892462Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-vtqtr_61de2c32-adcf-43c9-9f57-84213c6a9ff2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058","pid":3449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058","roo
tfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058/rootfs","created":"2021-08-14T09:48:05.381002408Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392","pid":4680,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392/rootfs","created":"2021-08-14T09:48:27.721008434Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2a79ace051eebbfe70618073983cefbcd
cb97bdac46296dc695a78a3da734ed4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","pid":4794,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7/rootfs","created":"2021-08-14T09:48:27.873061881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-vrv5k_805798a2-948f-4d0c-a548-07118d846033"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27","pid":4418,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c50fd0e548eeb
8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27/rootfs","created":"2021-08-14T09:48:27.129248849Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e","pid":5052,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e/rootfs","created":"2021-08-14T09:48:32.2050671Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernet
es.cri.sandbox-id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","pid":4582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d/rootfs","created":"2021-08-14T09:48:27.417027814Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-rjgmp_ca6ddeeb-6afd-4408-8ac8-39df00ec7dea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","pid":3334,"status":"running","bundle":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267/rootfs","created":"2021-08-14T09:48:05.065026281Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210814094108-6746_52ba5343cb2129d22c21dacb7ec19019"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f","pid":4354,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f/rootfs","created":"2021-08-14T09:48:26.900917654Z"
,"annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","pid":3320,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba/rootfs","created":"2021-08-14T09:48:05.065017871Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210814094108-6746_4b4a2c856eba1cd56b9bea8eefe382ea"},"owner":"root"}]
	I0814 09:48:46.545177  234750 cri.go:113] list returned 20 containers
	I0814 09:48:46.545196  234750 cri.go:116] container: {ID:1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920 Status:running}
	I0814 09:48:46.545210  234750 cri.go:118] skipping 1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920 - not in ps
	I0814 09:48:46.545226  234750 cri.go:116] container: {ID:27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d Status:running}
	I0814 09:48:46.545233  234750 cri.go:118] skipping 27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d - not in ps
	I0814 09:48:46.545242  234750 cri.go:116] container: {ID:2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4 Status:running}
	I0814 09:48:46.545252  234750 cri.go:118] skipping 2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4 - not in ps
	I0814 09:48:46.545262  234750 cri.go:116] container: {ID:2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6 Status:running}
	I0814 09:48:46.545273  234750 cri.go:118] skipping 2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6 - not in ps
	I0814 09:48:46.545282  234750 cri.go:116] container: {ID:4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47 Status:paused}
	I0814 09:48:46.545293  234750 cri.go:122] skipping {4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47 paused}: state = "paused", want "running"
	I0814 09:48:46.545308  234750 cri.go:116] container: {ID:4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769 Status:running}
	I0814 09:48:46.545318  234750 cri.go:116] container: {ID:4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23 Status:running}
	I0814 09:48:46.545328  234750 cri.go:118] skipping 4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23 - not in ps
	I0814 09:48:46.545336  234750 cri.go:116] container: {ID:85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee Status:running}
	I0814 09:48:46.545349  234750 cri.go:118] skipping 85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee - not in ps
	I0814 09:48:46.545358  234750 cri.go:116] container: {ID:86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992 Status:running}
	I0814 09:48:46.545367  234750 cri.go:116] container: {ID:8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17 Status:running}
	I0814 09:48:46.545378  234750 cri.go:116] container: {ID:974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97 Status:running}
	I0814 09:48:46.545385  234750 cri.go:118] skipping 974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97 - not in ps
	I0814 09:48:46.545393  234750 cri.go:116] container: {ID:b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058 Status:running}
	I0814 09:48:46.545402  234750 cri.go:116] container: {ID:bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392 Status:running}
	I0814 09:48:46.545409  234750 cri.go:116] container: {ID:c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7 Status:running}
	I0814 09:48:46.545424  234750 cri.go:118] skipping c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7 - not in ps
	I0814 09:48:46.545432  234750 cri.go:116] container: {ID:c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27 Status:running}
	I0814 09:48:46.545439  234750 cri.go:116] container: {ID:ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e Status:running}
	I0814 09:48:46.545449  234750 cri.go:116] container: {ID:cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d Status:running}
	I0814 09:48:46.545460  234750 cri.go:118] skipping cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d - not in ps
	I0814 09:48:46.545466  234750 cri.go:116] container: {ID:d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267 Status:running}
	I0814 09:48:46.545476  234750 cri.go:118] skipping d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267 - not in ps
	I0814 09:48:46.545484  234750 cri.go:116] container: {ID:d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f Status:running}
	I0814 09:48:46.545493  234750 cri.go:116] container: {ID:f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba Status:running}
	I0814 09:48:46.545503  234750 cri.go:118] skipping f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba - not in ps
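The skip decisions above implement a simple two-source filter: minikube takes the IDs that crictl reported, cross-references them against `runc list -f json`, and keeps only entries that appear in both lists and are still "running". A minimal Go sketch of that rule (type and function names are illustrative, not minikube's actual identifiers):

	package pause

	// runcContainer holds the two fields of the `runc list -f json`
	// output that the filter above actually uses.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausableIDs keeps only containers that crictl also reported and
	// that runc still shows as "running".
	func pausableIDs(listed []runcContainer, inPs map[string]bool) []string {
		var ids []string
		for _, c := range listed {
			if !inPs[c.ID] {
				// crictl did not report this ID (typically a sandbox);
				// logged above as "skipping <id> - not in ps".
				continue
			}
			if c.Status != "running" {
				// logged above as `state = "paused", want "running"`.
				continue
			}
			ids = append(ids, c.ID)
		}
		return ids
	}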
	I0814 09:48:46.545545  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769
	I0814 09:48:46.561413  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769 86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992
	I0814 09:48:46.573784  234750 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769 86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:48:46Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:48:47.114463  234750 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:48:47.124177  234750 pause.go:50] kubelet running: false
	I0814 09:48:47.124235  234750 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
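Note that minikube runs `systemctl disable --now kubelet` even though the preceding `is-active` check already reported kubelet as not running, presumably so the node agent cannot come back and restart workloads while containers are being paused.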
	I0814 09:48:47.215354  234750 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:48:47.215442  234750 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:48:47.283781  234750 cri.go:76] found id: "bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392"
	I0814 09:48:47.283807  234750 cri.go:76] found id: "86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992"
	I0814 09:48:47.283812  234750 cri.go:76] found id: "c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27"
	I0814 09:48:47.283816  234750 cri.go:76] found id: "d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f"
	I0814 09:48:47.283819  234750 cri.go:76] found id: "4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47"
	I0814 09:48:47.283823  234750 cri.go:76] found id: "8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17"
	I0814 09:48:47.283826  234750 cri.go:76] found id: "b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058"
	I0814 09:48:47.283833  234750 cri.go:76] found id: "4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769"
	I0814 09:48:47.283837  234750 cri.go:76] found id: "a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea"
	I0814 09:48:47.283862  234750 cri.go:76] found id: "ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e"
	I0814 09:48:47.283870  234750 cri.go:76] found id: ""
	I0814 09:48:47.283906  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:48:47.326443  234750 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","pid":4323,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920/rootfs","created":"2021-08-14T09:48:26.848992371Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-29ft7_c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d","pid":4850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27de7ceed67ed17d1b654ee4e6feebc
b913a0dbcaa091e9d930784451ce8b45d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d/rootfs","created":"2021-08-14T09:48:27.95699776Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-g5rms_b10653e3-cf89-4ad2-bfe0-02bd5e3ab136"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","pid":4383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4/rootfs","created":"2021-08-14T09:48:27.045038428Z","annotations":{"io.kubernetes.cri.container-typ
e":"sandbox","io.kubernetes.cri.sandbox-id":"2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_58508b3f-6c10-488b-b616-44a1cb8dfed8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","pid":3341,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6/rootfs","created":"2021-08-14T09:48:05.065001031Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210814094108-6746_b08bdb5366209a33dbc84d2eedb47b6d"},"owner":"root"},{"ociVersion":"1.0.
2-dev","id":"4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47","pid":3462,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47/rootfs","created":"2021-08-14T09:48:05.380958723Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769","pid":3450,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769/rootf
s","created":"2021-08-14T09:48:05.380928273Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","pid":4154,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23/rootfs","created":"2021-08-14T09:48:26.200907819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wjwsl_101b3998-93d5-4c75-b83c-09c983f2f62a"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","pid":3327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee/rootfs","created":"2021-08-14T09:48:05.064951226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210814094108-6746_c660633e79acbc2b90f8e45cbf17f7f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992","pid":4571,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992","rootfs":"/run/contai
nerd/io.containerd.runtime.v2.task/k8s.io/86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992/rootfs","created":"2021-08-14T09:48:27.502114533Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17","pid":3471,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17/rootfs","created":"2021-08-14T09:48:05.381053556Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0
c3d70460fba"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","pid":4150,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97/rootfs","created":"2021-08-14T09:48:26.200892462Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-vtqtr_61de2c32-adcf-43c9-9f57-84213c6a9ff2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058","pid":3449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058","root
fs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058/rootfs","created":"2021-08-14T09:48:05.381002408Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392","pid":4680,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392/rootfs","created":"2021-08-14T09:48:27.721008434Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2a79ace051eebbfe70618073983cefbcdc
b97bdac46296dc695a78a3da734ed4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","pid":4794,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7/rootfs","created":"2021-08-14T09:48:27.873061881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-vrv5k_805798a2-948f-4d0c-a548-07118d846033"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27","pid":4418,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c50fd0e548eeb8
d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27/rootfs","created":"2021-08-14T09:48:27.129248849Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e","pid":5052,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e/rootfs","created":"2021-08-14T09:48:32.2050671Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernete
s.cri.sandbox-id":"27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","pid":4582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d/rootfs","created":"2021-08-14T09:48:27.417027814Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-rjgmp_ca6ddeeb-6afd-4408-8ac8-39df00ec7dea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","pid":3334,"status":"running","bundle":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267/rootfs","created":"2021-08-14T09:48:05.065026281Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210814094108-6746_52ba5343cb2129d22c21dacb7ec19019"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f","pid":4354,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f/rootfs","created":"2021-08-14T09:48:26.900917654Z",
"annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","pid":3320,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba/rootfs","created":"2021-08-14T09:48:05.065017871Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210814094108-6746_4b4a2c856eba1cd56b9bea8eefe382ea"},"owner":"root"}]
	I0814 09:48:47.326668  234750 cri.go:113] list returned 20 containers
	I0814 09:48:47.326683  234750 cri.go:116] container: {ID:1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920 Status:running}
	I0814 09:48:47.326693  234750 cri.go:118] skipping 1401b1807f665323902c74b7f19dd771cb7adad5b76121facc9e86da74f80920 - not in ps
	I0814 09:48:47.326700  234750 cri.go:116] container: {ID:27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d Status:running}
	I0814 09:48:47.326707  234750 cri.go:118] skipping 27de7ceed67ed17d1b654ee4e6feebcb913a0dbcaa091e9d930784451ce8b45d - not in ps
	I0814 09:48:47.326710  234750 cri.go:116] container: {ID:2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4 Status:running}
	I0814 09:48:47.326715  234750 cri.go:118] skipping 2a79ace051eebbfe70618073983cefbcdcb97bdac46296dc695a78a3da734ed4 - not in ps
	I0814 09:48:47.326721  234750 cri.go:116] container: {ID:2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6 Status:running}
	I0814 09:48:47.326726  234750 cri.go:118] skipping 2e19cffd2eb99a97a5bc4bf328ff955c8c54394d860501e922d4bdeb641603c6 - not in ps
	I0814 09:48:47.326731  234750 cri.go:116] container: {ID:4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47 Status:paused}
	I0814 09:48:47.326737  234750 cri.go:122] skipping {4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47 paused}: state = "paused", want "running"
	I0814 09:48:47.326760  234750 cri.go:116] container: {ID:4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769 Status:paused}
	I0814 09:48:47.326771  234750 cri.go:122] skipping {4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769 paused}: state = "paused", want "running"
	I0814 09:48:47.326778  234750 cri.go:116] container: {ID:4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23 Status:running}
	I0814 09:48:47.326782  234750 cri.go:118] skipping 4bdaa3b79a93b325cc6710984201c1f33cd389c1d295649d332ccc7f77ea1e23 - not in ps
	I0814 09:48:47.326788  234750 cri.go:116] container: {ID:85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee Status:running}
	I0814 09:48:47.326792  234750 cri.go:118] skipping 85cff652cfd1f9a51cdeacdf0953bde7a173d468cb6a695ad3aa0ee154b1bcee - not in ps
	I0814 09:48:47.326796  234750 cri.go:116] container: {ID:86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992 Status:running}
	I0814 09:48:47.326800  234750 cri.go:116] container: {ID:8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17 Status:running}
	I0814 09:48:47.326806  234750 cri.go:116] container: {ID:974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97 Status:running}
	I0814 09:48:47.326811  234750 cri.go:118] skipping 974eae1472b45c6c7406f6426c9ce80a4f92add272f456328b85378b68632f97 - not in ps
	I0814 09:48:47.326817  234750 cri.go:116] container: {ID:b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058 Status:running}
	I0814 09:48:47.326821  234750 cri.go:116] container: {ID:bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392 Status:running}
	I0814 09:48:47.326827  234750 cri.go:116] container: {ID:c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7 Status:running}
	I0814 09:48:47.326832  234750 cri.go:118] skipping c110604bc22ff8b2f2c2ab8033a6a6a596709621f90c9c59d12b64da221b96b7 - not in ps
	I0814 09:48:47.326838  234750 cri.go:116] container: {ID:c50fd0e548eeb8d3ceb20189e07eb9a048e8b33f9555504be974dd57596f3f27 Status:running}
	I0814 09:48:47.326842  234750 cri.go:116] container: {ID:ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e Status:running}
	I0814 09:48:47.326850  234750 cri.go:116] container: {ID:cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d Status:running}
	I0814 09:48:47.326858  234750 cri.go:118] skipping cf60d1a3b76631663c53873d5e445ee2d719f58ddf5d94b299681f3e2e52ab6d - not in ps
	I0814 09:48:47.326862  234750 cri.go:116] container: {ID:d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267 Status:running}
	I0814 09:48:47.326868  234750 cri.go:118] skipping d3a70a813649d1e9215d16ef28fee8136dba7b344ae854480e37b8ba75739267 - not in ps
	I0814 09:48:47.326872  234750 cri.go:116] container: {ID:d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f Status:running}
	I0814 09:48:47.326878  234750 cri.go:116] container: {ID:f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba Status:running}
	I0814 09:48:47.326885  234750 cri.go:118] skipping f0e40e056b3cc36b8eec61cb0c6da235378fe678d925dad32a9a0c3d70460fba - not in ps
	I0814 09:48:47.326923  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992
	I0814 09:48:47.341242  234750 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992 8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17
	I0814 09:48:47.356294  234750 out.go:177] 
	W0814 09:48:47.356442  234750 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992 8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:48:47Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0814 09:48:47.356458  234750 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0814 09:48:47.359969  234750 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0814 09:48:47.361441  234750 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p no-preload-20210814094108-6746 --alsologtostderr -v=1 failed: exit status 80
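This is the failing assertion: `minikube pause` exits 80 (GUEST_PAUSE) because both the initial attempt (09:48:46) and its single retry after 540ms (09:48:47) paused the first container individually and then batched the remaining two IDs into a single `runc pause` call, which runc rejects with `"pause" requires exactly 1 argument(s)`, so the retry was bound to fail identically.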
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210814094108-6746
helpers_test.go:236: (dbg) docker inspect no-preload-20210814094108-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3",
	        "Created": "2021-08-14T09:41:10.066772897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:43:11.903170068Z",
	            "FinishedAt": "2021-08-14T09:43:09.639424897Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/hosts",
	        "LogPath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3-json.log",
	        "Name": "/no-preload-20210814094108-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210814094108-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210814094108-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210814094108-6746",
	                "Source": "/var/lib/docker/volumes/no-preload-20210814094108-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210814094108-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210814094108-6746",
	                "name.minikube.sigs.k8s.io": "no-preload-20210814094108-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "798b7d8418903154eb1fe4148c8ddb6cb61b065b9fed7dafddccb525405f4682",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32938"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32937"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32934"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32935"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/798b7d841890",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210814094108-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f79f2f866d42"
	                    ],
	                    "NetworkID": "b3ba1e9c1cb05c8c1a4d88161faa9897d77b38de1b24b25543acd0ac824e106d",
	                    "EndpointID": "61b51a9ef6e1441e83a1f1e8d1f9601c5c1b66ee5f74de42a1a60a8bfd02b019",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
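In the inspect output above, every node port is published only on loopback: 22/tcp (SSH) maps to 127.0.0.1:32938 and 8443/tcp (the apiserver) to 127.0.0.1:32935. A minimal Go sketch of that port lookup, assuming docker is on PATH and reusing the Go template that appears in the "Last Start" log further below (the container name is this run's profile):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort reads the host port Docker published for the container's
// 22/tcp; this is the port the SSH client dials via 127.0.0.1.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		// A stopped container has an empty Ports map, so the template
		// (and hence the whole lookup) fails once the node stops.
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-20210814094108-6746")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(port) // "32938" for the state captured above
}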
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210814094108-6746 -n no-preload-20210814094108-6746
E0814 09:48:51.402929    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210814094108-6746 -n no-preload-20210814094108-6746: exit status 2 (17.303487076s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:49:04.712822  235104 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
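The exit status 2 above comes from the apiserver health probe: https://192.168.49.2:8443/healthz answered 500 because the etcd check failed while every other post-start hook reported ok. A minimal Go sketch of an equivalent probe, assuming the unauthenticated /healthz endpoint is reachable as it evidently was in this run (TLS verification is skipped because the serving certificate chains to the cluster's own CA, not a system root):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Probe only: do not verify the cluster-issued cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 "ok"; this run returned 500 with
	// the per-check breakdown above ("[-]etcd failed: reason withheld").
	fmt.Println(resp.StatusCode, string(body))
}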
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210814094108-6746 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210814094108-6746 logs -n 25: exit status 110 (13.228104301s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:02 UTC | Sat, 14 Aug 2021 09:41:03 UTC |
	|         | -n 25                                             |                                     |         |         |                               |                               |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:03 UTC | Sat, 14 Aug 2021 09:41:06 UTC |
	|         | --alsologtostderr -v=5                            |                                     |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:39:02 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                     |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                     |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                     |         |         |                               |                               |
	|         | --keep-context=false                              |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                     |         |         |                               |                               |
	| profile | list --output json                                | minikube                            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:06 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:07 UTC | Sat, 14 Aug 2021 09:41:08 UTC |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:16 UTC | Sat, 14 Aug 2021 09:41:17 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:17 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                     |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                     |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                     |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                     |         |         |                               |                               |
	|         | --keep-context=false                              |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                     |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                     |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:18 UTC | Sat, 14 Aug 2021 09:43:19 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:20 UTC | Sat, 14 Aug 2021 09:43:21 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:21 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                     |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                   | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:50 UTC | Sat, 14 Aug 2021 09:44:51 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:51 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:48:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:48:45 UTC | Sat, 14 Aug 2021 09:48:45 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                     |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:45:12
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:45:12.676514  219213 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:45:12.676583  219213 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:12.676609  219213 out.go:311] Setting ErrFile to fd 2...
	I0814 09:45:12.676613  219213 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:12.676721  219213 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:45:12.677016  219213 out.go:305] Setting JSON to false
	I0814 09:45:12.712595  219213 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5275,"bootTime":1628929038,"procs":273,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:45:12.712697  219213 start.go:121] virtualization: kvm guest
	I0814 09:45:12.715906  219213 out.go:177] * [embed-certs-20210814094325-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:45:12.717448  219213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:45:12.716056  219213 notify.go:169] Checking for updates...
	I0814 09:45:12.719042  219213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:45:12.720597  219213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:45:12.722183  219213 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:45:12.722638  219213 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:45:12.723036  219213 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:45:12.771441  219213 docker.go:132] docker version: linux-19.03.15
	I0814 09:45:12.771543  219213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:45:12.851100  219213 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:45:12.80695312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:45:12.851195  219213 docker.go:244] overlay module found
	I0814 09:45:12.853255  219213 out.go:177] * Using the docker driver based on existing profile
	I0814 09:45:12.853279  219213 start.go:278] selected driver: docker
	I0814 09:45:12.853284  219213 start.go:751] validating driver "docker" against &{Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:45:12.853368  219213 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:45:12.853401  219213 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:45:12.853419  219213 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:45:12.854792  219213 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:45:12.855582  219213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:45:12.934182  219213 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:45:12.890723264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0814 09:45:12.934305  219213 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:45:12.934347  219213 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:45:12.936022  219213 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:45:12.936116  219213 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:45:12.936138  219213 cni.go:93] Creating CNI manager for ""
	I0814 09:45:12.936144  219213 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:45:12.936155  219213 start_flags.go:277] config:
	{Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:45:12.938131  219213 out.go:177] * Starting control plane node embed-certs-20210814094325-6746 in cluster embed-certs-20210814094325-6746
	I0814 09:45:12.938178  219213 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:45:12.939564  219213 out.go:177] * Pulling base image ...
	I0814 09:45:12.939606  219213 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:45:12.939663  219213 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:45:12.939695  219213 cache.go:56] Caching tarball of preloaded images
	I0814 09:45:12.939704  219213 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:45:12.939897  219213 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:45:12.939915  219213 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:45:12.940121  219213 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/config.json ...
	I0814 09:45:13.016192  219213 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:45:13.016219  219213 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:45:13.016239  219213 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:45:13.016281  219213 start.go:313] acquiring machines lock for embed-certs-20210814094325-6746: {Name:mk9d63dfbf0330e30e75ccffedf22e0c93e8bd0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:45:13.016378  219213 start.go:317] acquired machines lock for "embed-certs-20210814094325-6746" in 75.307µs
	I0814 09:45:13.016402  219213 start.go:93] Skipping create...Using existing machine configuration
	I0814 09:45:13.016410  219213 fix.go:55] fixHost starting: 
	I0814 09:45:13.016680  219213 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:45:13.054977  219213 fix.go:108] recreateIfNeeded on embed-certs-20210814094325-6746: state=Stopped err=<nil>
	W0814 09:45:13.055025  219213 fix.go:134] unexpected machine state, will restart: <nil>
	I0814 09:45:10.255829  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:12.256243  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:14.755147  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:13.057291  219213 out.go:177] * Restarting existing docker container for "embed-certs-20210814094325-6746" ...
	I0814 09:45:13.057358  219213 cli_runner.go:115] Run: docker start embed-certs-20210814094325-6746
	I0814 09:45:14.423199  219213 cli_runner.go:168] Completed: docker start embed-certs-20210814094325-6746: (1.365811138s)
	I0814 09:45:14.423277  219213 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:45:14.464030  219213 kic.go:420] container "embed-certs-20210814094325-6746" state is running.
	I0814 09:45:14.464412  219213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:45:14.503527  219213 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/config.json ...
	I0814 09:45:14.503726  219213 machine.go:88] provisioning docker machine ...
	I0814 09:45:14.503761  219213 ubuntu.go:169] provisioning hostname "embed-certs-20210814094325-6746"
	I0814 09:45:14.503808  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:14.540944  219213 main.go:130] libmachine: Using SSH client type: native
	I0814 09:45:14.541187  219213 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0814 09:45:14.541212  219213 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210814094325-6746 && echo "embed-certs-20210814094325-6746" | sudo tee /etc/hostname
	I0814 09:45:14.541692  219213 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34934->127.0.0.1:32948: read: connection reset by peer
	I0814 09:45:16.755965  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:18.756563  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:17.691853  219213 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210814094325-6746
	
	I0814 09:45:17.691924  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:17.730138  219213 main.go:130] libmachine: Using SSH client type: native
	I0814 09:45:17.730291  219213 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0814 09:45:17.730312  219213 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210814094325-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210814094325-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210814094325-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:45:17.851952  219213 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:45:17.851978  219213 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:45:17.851998  219213 ubuntu.go:177] setting up certificates
	I0814 09:45:17.852008  219213 provision.go:83] configureAuth start
	I0814 09:45:17.852050  219213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:45:17.892638  219213 provision.go:138] copyHostCerts
	I0814 09:45:17.892706  219213 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:45:17.892717  219213 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:45:17.892771  219213 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:45:17.892905  219213 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:45:17.892918  219213 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:45:17.892941  219213 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:45:17.893001  219213 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:45:17.893008  219213 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:45:17.893025  219213 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:45:17.893076  219213 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210814094325-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210814094325-6746]
	I0814 09:45:18.127966  219213 provision.go:172] copyRemoteCerts
	I0814 09:45:18.128031  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:45:18.128071  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.166720  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.256528  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:45:18.272768  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0814 09:45:18.287782  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:45:18.302468  219213 provision.go:86] duration metric: configureAuth took 450.451654ms
	I0814 09:45:18.302485  219213 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:45:18.302626  219213 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:45:18.302636  219213 machine.go:91] provisioned docker machine in 3.79889634s
	I0814 09:45:18.302643  219213 start.go:267] post-start starting for "embed-certs-20210814094325-6746" (driver="docker")
	I0814 09:45:18.302648  219213 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:45:18.302680  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:45:18.302723  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.341490  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.431525  219213 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:45:18.433991  219213 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:45:18.434015  219213 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:45:18.434023  219213 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:45:18.434028  219213 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:45:18.434036  219213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:45:18.434095  219213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:45:18.434172  219213 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:45:18.434268  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:45:18.440281  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:45:18.455342  219213 start.go:270] post-start completed in 152.690241ms
	I0814 09:45:18.455386  219213 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:45:18.455419  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.493843  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.580556  219213 fix.go:57] fixHost completed within 5.564142879s
	I0814 09:45:18.580580  219213 start.go:80] releasing machines lock for "embed-certs-20210814094325-6746", held for 5.564189475s
	I0814 09:45:18.580650  219213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:45:18.618680  219213 ssh_runner.go:149] Run: systemctl --version
	I0814 09:45:18.618720  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.618756  219213 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:45:18.618814  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.660557  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.660558  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.766126  219213 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:45:18.776888  219213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:45:18.785271  219213 docker.go:153] disabling docker service ...
	I0814 09:45:18.785321  219213 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:45:18.793782  219213 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:45:18.801548  219213 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:45:18.860901  219213 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:45:18.914390  219213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:45:18.922515  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:45:18.933706  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0814 09:45:18.945694  219213 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:45:18.951209  219213 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:45:18.951261  219213 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:45:18.957754  219213 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:45:18.963238  219213 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:45:19.017015  219213 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:45:19.086303  219213 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:45:19.086363  219213 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:45:19.089913  219213 start.go:413] Will wait 60s for crictl version
	I0814 09:45:19.089968  219213 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:45:19.111768  219213 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:45:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:45:21.255518  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:23.755237  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:25.755360  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:28.255803  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:30.158828  219213 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:45:30.214702  219213 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:45:30.214764  219213 ssh_runner.go:149] Run: containerd --version
	I0814 09:45:30.237176  219213 ssh_runner.go:149] Run: containerd --version
	I0814 09:45:30.263100  219213 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:45:30.263185  219213 cli_runner.go:115] Run: docker network inspect embed-certs-20210814094325-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:45:30.300976  219213 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:45:30.304083  219213 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:45:30.312845  219213 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:45:30.312901  219213 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:45:30.334193  219213 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:45:30.334210  219213 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:45:30.334241  219213 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:45:30.354604  219213 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:45:30.354620  219213 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:45:30.354663  219213 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:45:30.374680  219213 cni.go:93] Creating CNI manager for ""
	I0814 09:45:30.374706  219213 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:45:30.374715  219213 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:45:30.374730  219213 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210814094325-6746 NodeName:embed-certs-20210814094325-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:45:30.374834  219213 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210814094325-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:45:30.374912  219213 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210814094325-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
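The kubeadm config and kubelet unit dumped above are rendered from the options struct logged at kubeadm.go:153. A toy text/template rendering in the same spirit (template and field names are abbreviated here; minikube's actual template is much larger):

package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.58.2",
		APIServerPort:     8443,
		KubernetesVersion: "v1.21.3",
		PodSubnet:         "10.244.0.0/16",
	})
}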
	I0814 09:45:30.374963  219213 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:45:30.381148  219213 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:45:30.381205  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:45:30.387016  219213 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0814 09:45:30.398193  219213 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:45:30.409326  219213 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I0814 09:45:30.420343  219213 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:45:30.422902  219213 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:45:30.430717  219213 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746 for IP: 192.168.58.2
	I0814 09:45:30.430760  219213 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:45:30.430776  219213 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:45:30.430824  219213 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.key
	I0814 09:45:30.430848  219213 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key.cee25041
	I0814 09:45:30.430866  219213 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key
	I0814 09:45:30.430981  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:45:30.431030  219213 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:45:30.431046  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:45:30.431076  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:45:30.431109  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:45:30.431139  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:45:30.431203  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:45:30.432504  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:45:30.447364  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:45:30.462451  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:45:30.477633  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:45:30.492751  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:45:30.507832  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:45:30.522671  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:45:30.537713  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:45:30.552860  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:45:30.568110  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:45:30.582869  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:45:30.597583  219213 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:45:30.608341  219213 ssh_runner.go:149] Run: openssl version
	I0814 09:45:30.612731  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:45:30.619316  219213 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:45:30.622063  219213 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:45:30.622099  219213 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:45:30.626527  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:45:30.632449  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:45:30.638987  219213 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:45:30.641736  219213 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:45:30.641768  219213 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:45:30.645950  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:45:30.651801  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:45:30.658203  219213 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:45:30.660858  219213 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:45:30.660898  219213 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:45:30.665211  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:45:30.671014  219213 kubeadm.go:390] StartCluster: {Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:45:30.671088  219213 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:45:30.671126  219213 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:45:30.692470  219213 cri.go:76] found id: "55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5"
	I0814 09:45:30.692491  219213 cri.go:76] found id: "e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495"
	I0814 09:45:30.692497  219213 cri.go:76] found id: "3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9"
	I0814 09:45:30.692503  219213 cri.go:76] found id: "1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138"
	I0814 09:45:30.692510  219213 cri.go:76] found id: "54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae"
	I0814 09:45:30.692516  219213 cri.go:76] found id: "7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147"
	I0814 09:45:30.692521  219213 cri.go:76] found id: "9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38"
	I0814 09:45:30.692532  219213 cri.go:76] found id: "68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3"
	I0814 09:45:30.692537  219213 cri.go:76] found id: ""
	I0814 09:45:30.692568  219213 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:45:30.704880  219213 cri.go:103] JSON = null
	W0814 09:45:30.704915  219213 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0814 09:45:30.704947  219213 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:45:30.710859  219213 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0814 09:45:30.710880  219213 kubeadm.go:600] restartCluster start
	I0814 09:45:30.710914  219213 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0814 09:45:30.716486  219213 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:30.717211  219213 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210814094325-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:45:30.717396  219213 kubeconfig.go:128] "embed-certs-20210814094325-6746" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig - will repair!
	I0814 09:45:30.717822  219213 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:45:30.720063  219213 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:45:30.725765  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:30.725822  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:30.737276  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:30.937645  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:30.937706  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:30.951639  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.137902  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.137967  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.151090  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.338360  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.338448  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.352317  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.537501  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.537575  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.551094  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.738330  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.738396  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.751356  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.937610  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.937675  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.951045  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.138314  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.138381  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.153876  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.338137  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.338200  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.351552  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.537809  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.537865  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.550664  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:30.256668  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:32.763916  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:32.738135  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.738215  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.751149  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.938390  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.938455  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.951950  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.138244  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.138334  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.151217  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.337437  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.337501  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.350794  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.537891  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.537956  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.551203  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.737426  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.737490  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.750428  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.750448  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.750486  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.763161  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.763181  219213 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
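The block of pgrep probes above is a poll-until-deadline loop: every ~200 ms minikube re-runs pgrep -xnf kube-apiserver.*minikube.* until the process appears or the wait times out, at which point it decides the cluster "needs reconfigure". A simplified stand-in for that loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until pattern matches or timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // exit status 0: the process exists
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 3*time.Second); err != nil {
		fmt.Println(err) // here: fall back to reconfiguring the cluster
	}
}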
	I0814 09:45:33.763188  219213 kubeadm.go:1032] stopping kube-system containers ...
	I0814 09:45:33.763199  219213 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:45:33.763244  219213 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:45:33.822662  219213 cri.go:76] found id: "55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5"
	I0814 09:45:33.822682  219213 cri.go:76] found id: "e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495"
	I0814 09:45:33.822688  219213 cri.go:76] found id: "3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9"
	I0814 09:45:33.822692  219213 cri.go:76] found id: "1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138"
	I0814 09:45:33.822696  219213 cri.go:76] found id: "54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae"
	I0814 09:45:33.822700  219213 cri.go:76] found id: "7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147"
	I0814 09:45:33.822704  219213 cri.go:76] found id: "9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38"
	I0814 09:45:33.822708  219213 cri.go:76] found id: "68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3"
	I0814 09:45:33.822713  219213 cri.go:76] found id: ""
	I0814 09:45:33.822718  219213 cri.go:221] Stopping containers: [55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5 e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495 3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9 1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138 54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae 7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147 9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38 68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3]
	I0814 09:45:33.822767  219213 ssh_runner.go:149] Run: which crictl
	I0814 09:45:33.825398  219213 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5 e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495 3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9 1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138 54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae 7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147 9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38 68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3
	I0814 09:45:33.847462  219213 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0814 09:45:33.856441  219213 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:45:33.862510  219213 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 14 09:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 14 09:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug 14 09:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 14 09:43 /etc/kubernetes/scheduler.conf
	
	I0814 09:45:33.862560  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 09:45:33.868678  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 09:45:33.874650  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 09:45:33.880408  219213 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.880451  219213 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:45:33.885968  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 09:45:33.891591  219213 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.891625  219213 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:45:33.897228  219213 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:45:33.903147  219213 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0814 09:45:33.903164  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:33.957489  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:34.641477  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:34.769485  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:34.840519  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
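Rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A sketch of driving those phases in order (the sudo env PATH=... wrapper from the log is omitted for brevity):

package main

import (
	"log"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v: %v\n%s", phase, err, out)
		}
	}
}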
	I0814 09:45:34.907080  219213 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:45:34.907146  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:35.420191  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:35.920312  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:36.420275  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:36.920198  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:37.420399  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:35.255536  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:37.256069  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:39.755325  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:37.920245  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:38.419906  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:38.920456  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:39.419901  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:39.919881  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:40.419605  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:40.920452  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:41.420411  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:41.503885  219213 api_server.go:70] duration metric: took 6.596804894s to wait for apiserver process to appear ...
	I0814 09:45:41.503915  219213 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:45:41.503927  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:41.504322  219213 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0814 09:45:42.005044  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:41.755488  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:43.756277  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:45.704410  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:45:45.704447  219213 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:45:46.004588  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:46.008955  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:45:46.008981  219213 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:45:46.504491  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:46.510082  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:45:46.510106  219213 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:45:47.004597  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:47.009929  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:45:47.017789  219213 api_server.go:139] control plane version: v1.21.3
	I0814 09:45:47.017813  219213 api_server.go:129] duration metric: took 5.513890902s to wait for apiserver health ...
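The healthz exchange above follows the normal progression of an apiserver coming up: first connection refused, then HTTP 500 while some poststarthooks are still failing, then 200 once every hook reports ok. A sketch of the polling client (the InsecureSkipVerify transport is for illustration only; minikube authenticates the endpoint properly):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}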
	I0814 09:45:47.017825  219213 cni.go:93] Creating CNI manager for ""
	I0814 09:45:47.017833  219213 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:45:47.020350  219213 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:45:47.020399  219213 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:45:47.024074  219213 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:45:47.024091  219213 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:45:47.036846  219213 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:45:47.527366  219213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:45:47.537910  219213 system_pods.go:59] 9 kube-system pods found
	I0814 09:45:47.537945  219213 system_pods.go:61] "coredns-558bd4d5db-r9f9m" [a95b5bd5-9099-4c69-a77e-d319f3db017f] Running
	I0814 09:45:47.537954  219213 system_pods.go:61] "etcd-embed-certs-20210814094325-6746" [8a290f3e-9865-416a-a8b5-8185ce927699] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 09:45:47.537959  219213 system_pods.go:61] "kindnet-mmp5r" [77fdb837-eeb8-412b-a20e-ce5d6d198691] Running
	I0814 09:45:47.537963  219213 system_pods.go:61] "kube-apiserver-embed-certs-20210814094325-6746" [662d7fb3-b141-4d8e-a122-f58805e6b74a] Running
	I0814 09:45:47.537967  219213 system_pods.go:61] "kube-controller-manager-embed-certs-20210814094325-6746" [d5e10fb1-ef80-41f7-a5c6-8fb7ea20d7d4] Running
	I0814 09:45:47.537971  219213 system_pods.go:61] "kube-proxy-mgvn2" [2d2198aa-7650-47ff-81cc-7b3a13d11ac6] Running
	I0814 09:45:47.537976  219213 system_pods.go:61] "kube-scheduler-embed-certs-20210814094325-6746" [3a19f542-979a-4726-b47d-bebbfa29cfac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:45:47.537983  219213 system_pods.go:61] "metrics-server-7c784ccb57-57jxt" [69805726-3356-4374-ba02-ddee9ab9f4d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:45:47.538015  219213 system_pods.go:61] "storage-provisioner" [38121728-bc1a-4972-b44b-f156a068aea0] Running
	I0814 09:45:47.538020  219213 system_pods.go:74] duration metric: took 10.633349ms to wait for pod list to return data ...
	I0814 09:45:47.538026  219213 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:45:47.541194  219213 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:45:47.541218  219213 node_conditions.go:123] node cpu capacity is 8
	I0814 09:45:47.541229  219213 node_conditions.go:105] duration metric: took 3.196426ms to run NodePressure ...
	I0814 09:45:47.541243  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:46.255616  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:48.256248  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:48.104081  219213 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0814 09:45:48.108233  219213 kubeadm.go:746] kubelet initialised
	I0814 09:45:48.108257  219213 kubeadm.go:747] duration metric: took 4.148526ms waiting for restarted kubelet to initialise ...
	I0814 09:45:48.108265  219213 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:45:48.112670  219213 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:48.123242  219213 pod_ready.go:92] pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:48.123261  219213 pod_ready.go:81] duration metric: took 10.570463ms waiting for pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:48.123269  219213 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:50.133040  219213 pod_ready.go:102] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:52.631806  219213 pod_ready.go:102] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:50.755599  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:52.755700  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:53.131092  219213 pod_ready.go:92] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:53.131121  219213 pod_ready.go:81] duration metric: took 5.00784485s waiting for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:53.131136  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:55.141124  219213 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:55.256189  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:57.755425  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:59.758287  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:57.640595  219213 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:59.140433  219213 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:59.140459  219213 pod_ready.go:81] duration metric: took 6.00931561s waiting for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.140470  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.144444  219213 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:59.144461  219213 pod_ready.go:81] duration metric: took 3.983627ms waiting for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.144472  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mgvn2" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.148266  219213 pod_ready.go:92] pod "kube-proxy-mgvn2" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:59.148282  219213 pod_ready.go:81] duration metric: took 3.80255ms waiting for pod "kube-proxy-mgvn2" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.148292  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:46:00.656204  219213 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:46:00.656230  219213 pod_ready.go:81] duration metric: took 1.507930198s waiting for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:46:00.656240  219213 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace to be "Ready" ...
	I0814 09:46:02.255586  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:04.255652  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:02.667460  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:05.164740  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:06.756092  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:08.757713  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:07.667796  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:10.164641  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:12.166888  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:11.256110  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:13.755045  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:14.665465  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:17.165469  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:15.755867  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:17.757681  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:19.664971  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:21.665400  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:20.255902  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:22.756302  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:24.165735  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:26.665281  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:25.256153  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:27.756182  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:29.164962  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:31.165841  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:30.255896  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:32.756408  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:33.165897  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:35.705688  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:35.256011  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:37.756706  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:38.165458  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:40.665526  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:40.255925  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:42.755119  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:44.756095  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:42.665761  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:45.164449  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:47.164767  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:47.255503  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:49.755534  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:49.166034  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:51.667107  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:52.255090  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:54.256100  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:54.165651  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:56.665545  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:56.755593  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:58.756359  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:59.165034  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:01.665058  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:01.255956  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:03.755700  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:03.665786  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:05.665871  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:05.756151  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:07.757044  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:08.165326  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:10.664757  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:10.255756  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:12.256098  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:14.755338  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:12.665136  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:15.165593  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:17.255407  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:19.255863  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:17.665131  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:20.165233  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:21.755368  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:24.255557  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:22.665620  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:24.665692  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:27.165245  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:26.755503  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:28.756323  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:29.165429  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:31.165564  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:31.255941  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:33.755230  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:33.664908  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:35.664963  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:35.755489  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:38.255591  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:37.665745  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:40.165232  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:42.169685  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:40.755707  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:42.757519  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:44.665645  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:47.164059  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:45.255953  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:47.756058  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:49.756177  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:49.165007  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:51.165044  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:52.254962  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:54.256013  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:54.751367  198227 pod_ready.go:81] duration metric: took 4m0.004529083s waiting for pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace to be "Ready" ...
	E0814 09:47:54.751388  198227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 09:47:54.751409  198227 pod_ready.go:38] duration metric: took 4m10.618186425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:47:54.751435  198227 kubeadm.go:604] restartCluster took 4m26.295245598s
	W0814 09:47:54.751568  198227 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
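The four-minute timeout logged above is minikube's pod_ready wait loop giving up on metrics-server. A minimal sketch of that polling pattern, written against client-go (the helper name and the 2s interval are assumptions; minikube's real pod_ready.go differs in detail):

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod's PodReady condition until it is True or
// the deadline passes, emitting the same "Ready":"False" heartbeat seen
// in the log above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors count as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

When the condition never turns True, PollImmediate returns wait.ErrWaitTimeout, which is what surfaces above as "timed out waiting 4m0s ... (will not retry!)".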
	I0814 09:47:54.751599  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0814 09:47:53.165474  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:55.165815  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:57.948472  198227 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.196842047s)
	I0814 09:47:57.948535  198227 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0814 09:47:57.958148  198227 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:47:57.958208  198227 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:47:57.980246  198227 cri.go:76] found id: ""
	I0814 09:47:57.980305  198227 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:47:57.986736  198227 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:47:57.986783  198227 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:47:57.992761  198227 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:47:57.992809  198227 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:47:58.246319  198227 out.go:204]   - Generating certificates and keys ...
	I0814 09:47:58.869652  198227 out.go:204]   - Booting up control plane ...
	I0814 09:47:57.667421  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:00.165338  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:02.665332  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:05.165237  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:11.920868  198227 out.go:204]   - Configuring RBAC rules ...
	I0814 09:48:12.333322  198227 cni.go:93] Creating CNI manager for ""
	I0814 09:48:12.333346  198227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:48:07.665749  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:10.165312  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:12.334944  198227 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:48:12.335010  198227 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:48:12.338427  198227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0814 09:48:12.338447  198227 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:48:12.350499  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:48:12.492357  198227 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:48:12.492429  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:12.492429  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=no-preload-20210814094108-6746 minikube.k8s.io/updated_at=2021_08_14T09_48_12_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:12.507107  198227 ops.go:34] apiserver oom_adj: -16
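The oom_adj line above records minikube protecting the apiserver from the OOM killer (-16 makes it a much less likely victim under memory pressure). A self-contained sketch of the same probe (pure illustration; minikube runs it through ssh_runner as the bash one-liner shown at 09:48:12.492):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver pid, as "pgrep kube-apiserver" does above.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	// Read the legacy OOM-score knob for that process.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}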
	I0814 09:48:12.552786  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:13.132225  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:13.633067  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:14.132170  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:14.632217  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:15.132235  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:12.665519  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:15.164821  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:17.166787  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:15.632749  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:16.132739  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:16.632122  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:17.132912  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:17.632125  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:18.132897  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:18.632617  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:19.132575  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:19.632602  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:20.132757  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:19.665114  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:22.164949  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:20.632671  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:21.132469  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:21.632253  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:22.132187  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:22.632401  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:23.132901  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:23.633122  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:24.132310  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:24.185162  198227 kubeadm.go:985] duration metric: took 11.692791811s to wait for elevateKubeSystemPrivileges.
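The burst of "kubectl get sa default" calls above is a poll: kubeadm init has just finished, and minikube waits for the controller manager to mint the default ServiceAccount before it binds cluster-admin to kube-system:default. A hedged client-go equivalent (function name and the "default" namespace are illustrative; the 500ms interval matches the cadence visible in the timestamps):

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultSA returns once GET serviceaccounts/default succeeds,
// which is the readiness signal the loop above is polling for.
func waitDefaultSA(cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil
	})
}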
	I0814 09:48:24.185192  198227 kubeadm.go:392] StartCluster complete in 4m55.788730202s
	I0814 09:48:24.185214  198227 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:48:24.185304  198227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:48:24.186142  198227 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:48:24.701063  198227 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210814094108-6746" rescaled to 1
	I0814 09:48:24.701114  198227 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0814 09:48:24.702623  198227 out.go:177] * Verifying Kubernetes components...
	I0814 09:48:24.702671  198227 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:48:24.701153  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:48:24.701174  198227 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0814 09:48:24.702782  198227 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702795  198227 addons.go:59] Setting metrics-server=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702805  198227 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210814094108-6746"
	I0814 09:48:24.702804  198227 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702826  198227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210814094108-6746"
	I0814 09:48:24.702785  198227 addons.go:59] Setting dashboard=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702863  198227 addons.go:135] Setting addon dashboard=true in "no-preload-20210814094108-6746"
	W0814 09:48:24.702878  198227 addons.go:147] addon dashboard should already be in state true
	I0814 09:48:24.702806  198227 addons.go:135] Setting addon metrics-server=true in "no-preload-20210814094108-6746"
	I0814 09:48:24.702910  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	W0814 09:48:24.702918  198227 addons.go:147] addon metrics-server should already be in state true
	I0814 09:48:24.702951  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:24.701371  198227 config.go:177] Loaded profile config "no-preload-20210814094108-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	W0814 09:48:24.702812  198227 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:48:24.703062  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:24.703172  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.703404  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.703453  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.703512  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.714164  198227 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210814094108-6746" to be "Ready" ...
	I0814 09:48:24.716834  198227 node_ready.go:49] node "no-preload-20210814094108-6746" has status "Ready":"True"
	I0814 09:48:24.716855  198227 node_ready.go:38] duration metric: took 2.654773ms waiting for node "no-preload-20210814094108-6746" to be "Ready" ...
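node_ready does for the node what pod_ready does for pods: read one status condition. A minimal check under the same client-go assumptions, the test the 2.65ms wait above passed on its first try:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}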
	I0814 09:48:24.716870  198227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:48:24.721408  198227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:24.726246  198227 pod_ready.go:92] pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:24.726265  198227 pod_ready.go:81] duration metric: took 4.82012ms waiting for pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:24.726277  198227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:24.766130  198227 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0814 09:48:24.767653  198227 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0814 09:48:24.767719  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0814 09:48:24.767732  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0814 09:48:24.767787  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.771481  198227 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210814094108-6746"
	W0814 09:48:24.771507  198227 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:48:24.771534  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:24.772197  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.774269  198227 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0814 09:48:24.774359  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 09:48:24.774376  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0814 09:48:24.774439  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.779524  198227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:48:24.779669  198227 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:48:24.779682  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:48:24.779744  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.805000  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 09:48:24.828072  198227 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:48:24.828096  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:48:24.828161  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.832721  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:48:24.847396  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:48:24.859514  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:48:24.876349  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
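The four sshutil lines above each open an SSH session to the same docker-mapped port (127.0.0.1:32938), one per addon goroutine. A rough sketch of that client setup with golang.org/x/crypto/ssh (disabling host-key checking mirrors the throwaway local-VM use case; do not copy that for real hosts):

package sketch

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials addr ("127.0.0.1:32938" above) as user "docker"
// using the profile's id_rsa private key.
func newSSHClient(addr, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	return ssh.Dial("tcp", addr, cfg)
}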
	I0814 09:48:25.114534  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:48:25.114585  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:48:25.115333  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0814 09:48:25.115352  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0814 09:48:25.129656  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0814 09:48:25.129681  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0814 09:48:25.202169  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0814 09:48:25.202191  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0814 09:48:25.215767  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 09:48:25.215790  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0814 09:48:25.218133  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0814 09:48:25.218155  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0814 09:48:25.232121  198227 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
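The sed pipeline at 09:48:24.805 and this "host record injected" line implement one idea: splice a hosts{} stanza into the CoreDNS Corefile so host.minikube.internal resolves to the host gateway IP. An equivalent client-go sketch (an assumed helper for illustration; minikube actually does it with sed over SSH, as shown):

package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord inserts a hosts block ahead of the resolv.conf
// forwarder in the coredns ConfigMap, then writes the ConfigMap back.
func injectHostRecord(cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}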
	I0814 09:48:25.312880  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 09:48:25.312905  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0814 09:48:25.321275  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0814 09:48:25.321299  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0814 09:48:25.330492  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:48:25.330518  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0814 09:48:25.336826  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0814 09:48:25.336849  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0814 09:48:25.408475  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:48:25.414709  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0814 09:48:25.414733  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0814 09:48:25.435482  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0814 09:48:25.435509  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0814 09:48:25.504284  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:48:25.504309  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0814 09:48:25.516637  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:48:26.201697  198227 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210814094108-6746"
	I0814 09:48:26.617400  198227 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.100703883s)
	I0814 09:48:24.664544  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:26.665266  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:26.619426  198227 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0814 09:48:26.619465  198227 addons.go:344] enableAddons completed in 1.91829394s
	I0814 09:48:26.807531  198227 pod_ready.go:102] pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:29.236143  198227 pod_ready.go:102] pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:30.236508  198227 pod_ready.go:92] pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:30.236535  198227 pod_ready.go:81] duration metric: took 5.510248695s waiting for pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.236549  198227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.240532  198227 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:30.240551  198227 pod_ready.go:81] duration metric: took 3.988507ms waiting for pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.240564  198227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.243995  198227 pod_ready.go:92] pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:30.244008  198227 pod_ready.go:81] duration metric: took 3.436745ms waiting for pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.244015  198227 pod_ready.go:38] duration metric: took 5.527130406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:48:30.244030  198227 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:48:30.244064  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:48:30.266473  198227 api_server.go:70] duration metric: took 5.565330794s to wait for apiserver process to appear ...
	I0814 09:48:30.266497  198227 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:48:30.266508  198227 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 09:48:30.270876  198227 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0814 09:48:30.271613  198227 api_server.go:139] control plane version: v1.22.0-rc.0
	I0814 09:48:30.271629  198227 api_server.go:129] duration metric: took 5.127102ms to wait for apiserver health ...
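The healthz check above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A stand-alone sketch (TLS verification is skipped here only for brevity; the real client trusts the cluster CA):

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes https://<addr>/healthz, e.g. addr = "192.168.49.2:8443".
func checkHealthz(addr string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // the log above prints "returned 200: ok"
}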
	I0814 09:48:30.271637  198227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:48:30.276088  198227 system_pods.go:59] 10 kube-system pods found
	I0814 09:48:30.276114  198227 system_pods.go:61] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.276121  198227 system_pods.go:61] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.276126  198227 system_pods.go:61] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.276132  198227 system_pods.go:61] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0814 09:48:30.276139  198227 system_pods.go:61] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.276145  198227 system_pods.go:61] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.276152  198227 system_pods.go:61] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.276160  198227 system_pods.go:61] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.276165  198227 system_pods.go:61] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.276173  198227 system_pods.go:61] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.276182  198227 system_pods.go:74] duration metric: took 4.540474ms to wait for pod list to return data ...
	I0814 09:48:30.276192  198227 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:48:30.278320  198227 default_sa.go:45] found service account: "default"
	I0814 09:48:30.278337  198227 default_sa.go:55] duration metric: took 2.139851ms for default service account to be created ...
	I0814 09:48:30.278343  198227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:48:30.283291  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:30.283316  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.283323  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.283331  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.283337  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0814 09:48:30.283355  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.283360  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.283366  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.283373  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.283380  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.283388  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.283406  198227 retry.go:31] will retry after 305.063636ms: missing components: kube-dns, kube-proxy
	I0814 09:48:30.607116  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:30.607150  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.607159  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.607167  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.607181  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0814 09:48:30.607192  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.607207  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.607220  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.607231  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.607245  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.607257  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.607278  198227 retry.go:31] will retry after 338.212508ms: missing components: kube-dns, kube-proxy
	I0814 09:48:30.951806  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:30.951844  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.951855  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.951861  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.951867  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Running
	I0814 09:48:30.951871  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.951877  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.951883  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.951892  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.951898  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.951905  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.951920  198227 retry.go:31] will retry after 378.459802ms: missing components: kube-dns, kube-proxy
	I0814 09:48:31.336914  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:31.336951  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.336961  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.336970  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:31.336978  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Running
	I0814 09:48:31.336987  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:31.336998  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:31.337008  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Running
	I0814 09:48:31.337018  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:31.337029  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:31.337038  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:31.337063  198227 retry.go:31] will retry after 469.882201ms: missing components: kube-dns
	I0814 09:48:31.812707  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:31.812738  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.812748  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.812753  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:31.812758  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Running
	I0814 09:48:31.812762  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:31.812769  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:31.812777  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Running
	I0814 09:48:31.812784  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:31.812819  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:31.812832  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:31.812840  198227 system_pods.go:126] duration metric: took 1.534492166s to wait for k8s-apps to be running ...
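The retry.go delays above grow from 305ms to 338ms, 378ms, then 469ms across attempts: a jittered, gently increasing backoff rather than a fixed sleep. A sketch of that shape (the exact policy is an assumption; only the growing, randomized delays are taken from the log):

package sketch

import (
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff re-runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered, slowly growing delay between tries.
func retryBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/4))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay += delay / 10 // ~10% growth per attempt
	}
	return err
}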
	I0814 09:48:31.812850  198227 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:48:31.812891  198227 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:48:31.822119  198227 system_svc.go:56] duration metric: took 9.262136ms WaitForService to wait for kubelet.
	I0814 09:48:31.822141  198227 kubeadm.go:547] duration metric: took 7.121002492s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:48:31.822166  198227 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:48:31.824651  198227 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:48:31.824679  198227 node_conditions.go:123] node cpu capacity is 8
	I0814 09:48:31.824696  198227 node_conditions.go:105] duration metric: took 2.52389ms to run NodePressure ...
	I0814 09:48:31.824710  198227 start.go:231] waiting for startup goroutines ...
	I0814 09:48:31.874398  198227 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0814 09:48:31.876033  198227 out.go:177] 
	W0814 09:48:31.876182  198227 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0814 09:48:31.877533  198227 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0814 09:48:31.879774  198227 out.go:177] * Done! kubectl is now configured to use "no-preload-20210814094108-6746" cluster and "default" namespace by default
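The closing version check compares minor versions only: kubectl 1.20 against cluster 1.22 gives the "minor skew: 2" that triggered the warning above. The arithmetic, as a hedged sketch (parsing is simplified; real code should handle malformed version strings):

package sketch

import (
	"strconv"
	"strings"
)

// minorSkew returns |minor(a) - minor(b)|;
// minorSkew("1.20.5", "1.22.0-rc.0") == 2.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		n, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(v, "v"), ".")[1])
		return n
	}
	skew := minor(a) - minor(b)
	if skew < 0 {
		skew = -skew
	}
	return skew
}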
	I0814 09:48:29.164787  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:31.165193  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:33.664920  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:35.666174  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:38.164723  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:40.166126  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:42.664557  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:45.165400  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:47.165576  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:49.667170  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:52.165979  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:54.664282  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:56.664906  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:59.165340  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:01.664298  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
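
The pod_ready.go lines above are a second test process (pid 219213) polling the metrics-server pod's Ready condition every couple of seconds; it stays "False" because the pod's image deliberately points at an unreachable registry (see the containerd log below). A sketch of the same condition read using client-go, assumed available at a compatible version:

    // podready.go: sketch of the Ready-condition check behind pod_ready.go.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := kubernetes.NewForConfigOrDie(cfg).CoreV1().Pods("kube-system").
    		Get(context.TODO(), "metrics-server-7c784ccb57-57jxt", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, c.Status)
    		}
    	}
    }
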
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	a8c67ef87cdd1       523cad1a4df73       32 seconds ago       Exited              dashboard-metrics-scraper   1                   c110604bc22ff
	ce735ece4a608       9a07b5b4bfac0       33 seconds ago       Running             kubernetes-dashboard        0                   27de7ceed67ed
	bec3c33484023       6e38f40d628db       37 seconds ago       Exited              storage-provisioner         0                   2a79ace051eeb
	86e16032bb32e       8d147537fb7d1       38 seconds ago       Running             coredns                     0                   1401b1807f665
	c50fd0e548eeb       6de166512aa22       38 seconds ago       Running             kindnet-cni                 0                   974eae1472b45
	d908529b4ea55       ea6b13ed84e03       38 seconds ago       Running             kube-proxy                  0                   4bdaa3b79a93b
	4830e1aecf966       0048118155842       About a minute ago   Running             etcd                        2                   2e19cffd2eb99
	8f34f0d629c73       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager     2                   f0e40e056b3cc
	b75eb53336736       b2462aa94d403       About a minute ago   Running             kube-apiserver              2                   d3a70a813649d
	4a0ace495389a       7da2efaa5b480       About a minute ago   Running             kube-scheduler              2                   85cff652cfd1f
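
The table above is the CRI's view of the node; the Exited dashboard-metrics-scraper row (ATTEMPT 1) matches the CrashLoopBackOff entries in the kubelet log further down. The same view can be reproduced by running crictl inside the node, sketched here via minikube ssh with the profile name taken from the log:

    // crictl.go: sketch reproducing the container-status table above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
    		"-p", "no-preload-20210814094108-6746", "sudo crictl ps -a").CombinedOutput()
    	if err != nil {
    		fmt.Println("ssh failed:", err)
    	}
    	fmt.Print(string(out))
    }
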
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:43:12 UTC, end at Sat 2021-08-14 09:49:05 UTC. --
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.449209835Z" level=info msg="Finish piping stdout of container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.449213683Z" level=info msg="Finish piping stderr of container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.450758957Z" level=info msg="TaskExit event &TaskExit{ContainerID:86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101,ID:86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101,Pid:4542,ExitStatus:0,ExitedAt:2021-08-14 09:48:33.450542968 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.452933161Z" level=info msg="RemoveContainer for \"70970b102b286814dc4d1f0b4c4950106b289b347b95579efcb8c3a7701db667\" returns successfully"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.497301406Z" level=info msg="shim disconnected" id=86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.497369820Z" level=error msg="copy shim log" error="read /proc/self/fd/113: file already closed"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.499164300Z" level=info msg="StopContainer for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" returns successfully"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.499592294Z" level=info msg="StopPodSandbox for \"cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.499655821Z" level=info msg="Container to stop \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.577817220Z" level=info msg="TaskExit event &TaskExit{ContainerID:cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658,ID:cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658,Pid:4315,ExitStatus:137,ExitedAt:2021-08-14 09:48:33.577650676 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.613221535Z" level=info msg="shim disconnected" id=cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.613289283Z" level=error msg="copy shim log" error="read /proc/self/fd/75: file already closed"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.704902098Z" level=info msg="TearDown network for sandbox \"cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658\" successfully"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.704940624Z" level=info msg="StopPodSandbox for \"cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658\" returns successfully"
	Aug 14 09:48:34 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:34.451159326Z" level=info msg="RemoveContainer for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\""
	Aug 14 09:48:34 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:34.457567238Z" level=info msg="RemoveContainer for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" returns successfully"
	Aug 14 09:48:34 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:34.457963928Z" level=error msg="ContainerStatus for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": not found"
	Aug 14 09:48:42 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:42.227528816Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:48:42 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:42.298678190Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" host=fake.domain
	Aug 14 09:48:42 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:42.299958157Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.941207100Z" level=info msg="Finish piping stderr of container \"bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392\""
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.941248210Z" level=info msg="Finish piping stdout of container \"bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392\""
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.942461388Z" level=info msg="TaskExit event &TaskExit{ContainerID:bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392,ID:bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392,Pid:4680,ExitStatus:255,ExitedAt:2021-08-14 09:48:57.94223823 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.985428147Z" level=info msg="shim disconnected" id=bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.985507073Z" level=error msg="copy shim log" error="read /proc/self/fd/127: file already closed"
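
The PullImage failure above is the deliberate part of this test: the metrics-server image is tagged under fake.domain, so the pull dies at DNS resolution on the node's resolver (192.168.49.1:53). The same failure reproduced in isolation:

    // lookup.go: the pull error above bottoms out in this DNS failure.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	if _, err := net.LookupHost("fake.domain"); err != nil {
    		fmt.Println(err) // lookup fake.domain: no such host
    	}
    }
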
	
	* 
	* ==> coredns [86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +0.003974] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +2.011861] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +4.095709] net_ratelimit: 1 callbacks suppressed
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +0.000018] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000038] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[Aug14 09:46] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[Aug14 09:48] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3dde905c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 96 ee 68 e6 84 31 08 06        ........h..1..
	[  +0.032259] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha730867e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 b0 2c 69 36 56 08 06        ........,i6V..
	[  +0.715640] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth2cf9a783
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 ed 1c 18 61 89 08 06        ..........a...
	[  +0.453803] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethfd647b8c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e c9 5e 1b 0b 08 08 06        ........^.....
	[  +0.238950] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth66c80aa5
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 9d a2 94 49 09 08 06        ......B...I...
	
	* 
	* ==> etcd [4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47] <==
	* {"level":"info","ts":"2021-08-14T09:48:05.519Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:no-preload-20210814094108-6746 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.109Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-08-14T09:48:06.109Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  09:49:17 up  1:31,  0 users,  load average: 0.56, 1.54, 1.76
	Linux no-preload-20210814094108-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058] <==
	* I0814 09:48:17.225505       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 09:48:24.733906       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0814 09:48:24.803139       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0814 09:48:28.803878       1 handler_proxy.go:104] no RequestInfo found in the context
	E0814 09:48:28.803950       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0814 09:48:28.803965       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0814 09:48:57.921958       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0814 09:48:57.922052       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:48:57.923036       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:48:57.924202       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:48:57.925401       1 trace.go:205] Trace[1163957196]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:fc77ea21-05d7-4e07-b158-875e1821e42b,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:48:47.924) (total time: 10000ms):
	Trace[1163957196]: [10.000923151s] [10.000923151s] END
	E0814 09:48:57.932463       1 timeout.go:135] post-timeout activity - time-elapsed: 10.366516ms, GET "/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result: <nil>
	W0814 09:49:07.674548       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	I0814 09:49:17.714485       1 trace.go:205] Trace[1380020264]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:48:47.726) (total time: 29988ms):
	Trace[1380020264]: [29.988151672s] [29.988151672s] END
	E0814 09:49:17.714531       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00c769d40)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0814 09:49:17.714490       1 trace.go:205] Trace[824041464]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (14-Aug-2021 09:49:05.267) (total time: 12447ms):
	Trace[824041464]: [12.447188382s] [12.447188382s] END
	E0814 09:49:17.714561       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc00ca3d2c0)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0814 09:49:17.714791       1 trace.go:205] Trace[246227748]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:a4c0b1ad-0a74-4435-9974-64d6c46cf12e,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:48:47.726) (total time: 29988ms):
	Trace[246227748]: [29.988484532s] [29.988484532s] END
	I0814 09:49:17.715858       1 trace.go:205] Trace[873974150]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:ec523ae5-ffab-4d04-8b29-a2368546bccc,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:49:05.267) (total time: 12448ms):
	Trace[873974150]: [12.44859817s] [12.44859817s] END
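
The ~29s List traces and the Unavailable errors above share one cause: the apiserver's gRPC client stopped receiving keepalive ACKs from etcd after the test paused the node. Illustrative client-side keepalive settings of the kind involved; the values are examples, not the apiserver's actual configuration:

    // keepalive.go: sketch of gRPC client keepalive parameters.
    package main

    import (
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/keepalive"
    )

    func main() {
    	conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure(),
    		grpc.WithKeepaliveParams(keepalive.ClientParameters{
    			Time:    10 * time.Second, // ping after this much idle time
    			Timeout: 20 * time.Second, // connection is dead if no ACK by then
    		}))
    	if err == nil {
    		defer conn.Close()
    	}
    }
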
	
	* 
	* ==> kube-controller-manager [8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17] <==
	* E0814 09:48:26.005593       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:48:26.005939       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0814 09:48:26.011531       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:48:26.011542       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:48:26.020471       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-rjgmp"
	I0814 09:48:26.209993       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0814 09:48:26.218509       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.224473       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.228643       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0814 09:48:26.304403       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.304652       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:48:26.305423       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.309238       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.309402       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.313321       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.316615       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.316618       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:48:26.323101       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.323153       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.327251       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.327254       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:48:26.412766       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-vrv5k"
	I0814 09:48:26.416415       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-g5rms"
	E0814 09:48:54.103007       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:48:54.514714       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f] <==
	* I0814 09:48:27.037457       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:48:27.037498       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:48:27.037517       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0814 09:48:27.205376       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:48:27.205408       1 server_others.go:212] Using iptables Proxier.
	I0814 09:48:27.205421       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:48:27.205438       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:48:27.205854       1 server.go:649] Version: v1.22.0-rc.0
	I0814 09:48:27.206841       1 config.go:224] Starting endpoint slice config controller
	I0814 09:48:27.206877       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0814 09:48:27.206981       1 config.go:315] Starting service config controller
	I0814 09:48:27.206987       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0814 09:48:27.212096       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210814094108-6746.169b234dc86dbb15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03e023acc51449f, ext:284480115, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210814094108-6746", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210814094108-6746", UID:"no-preload-20210814094108-6746", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210814094108-6746.169b234dc86dbb15" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0814 09:48:27.307055       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:48:27.307055       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769] <==
	* I0814 09:48:09.020591       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0814 09:48:09.020665       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0814 09:48:09.023015       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:48:09.023085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.023241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.023292       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:48:09.023308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:48:09.023357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.023395       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:48:09.023536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:48:09.023548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:48:09.023579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:48:09.023643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:48:09.023694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:48:09.023738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:48:09.023795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.024171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 09:48:09.850208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:48:09.904384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:48:09.915320       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:48:09.966609       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:48:10.144055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:48:10.201977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:48:10.489524       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0814 09:48:10.620395       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
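
The burst of Forbidden errors above is the usual scheduler start-up race: it begins listing resources before the apiserver has its bootstrap RBAC in place, and the final "Caches are synced" line shows it recovered. A sketch probing the scheduler's access with kubectl's auth subcommand:

    // cani.go: sketch of an RBAC probe for the scheduler identity.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, _ := exec.Command("kubectl", "auth", "can-i", "list", "nodes",
    		"--as=system:kube-scheduler").CombinedOutput()
    	fmt.Print(string(out)) // "yes" once the bootstrap cluster roles exist
    }
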
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:43:12 UTC, end at Sat 2021-08-14 09:49:17 UTC. --
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.866645    3567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/745b232d-997b-47fd-9540-725527a7c8e0-config-volume\") pod \"745b232d-997b-47fd-9540-725527a7c8e0\" (UID: \"745b232d-997b-47fd-9540-725527a7c8e0\") "
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.866695    3567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd2wv\" (UniqueName: \"kubernetes.io/projected/745b232d-997b-47fd-9540-725527a7c8e0-kube-api-access-jd2wv\") pod \"745b232d-997b-47fd-9540-725527a7c8e0\" (UID: \"745b232d-997b-47fd-9540-725527a7c8e0\") "
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: W0814 09:48:33.866956    3567 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/745b232d-997b-47fd-9540-725527a7c8e0/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.867077    3567 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745b232d-997b-47fd-9540-725527a7c8e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "745b232d-997b-47fd-9540-725527a7c8e0" (UID: "745b232d-997b-47fd-9540-725527a7c8e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.885143    3567 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/745b232d-997b-47fd-9540-725527a7c8e0-kube-api-access-jd2wv" (OuterVolumeSpecName: "kube-api-access-jd2wv") pod "745b232d-997b-47fd-9540-725527a7c8e0" (UID: "745b232d-997b-47fd-9540-725527a7c8e0"). InnerVolumeSpecName "kube-api-access-jd2wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.916222    3567 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.967306    3567 reconciler.go:319] "Volume detached for volume \"kube-api-access-jd2wv\" (UniqueName: \"kubernetes.io/projected/745b232d-997b-47fd-9540-725527a7c8e0-kube-api-access-jd2wv\") on node \"no-preload-20210814094108-6746\" DevicePath \"\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.967381    3567 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/745b232d-997b-47fd-9540-725527a7c8e0-config-volume\") on node \"no-preload-20210814094108-6746\" DevicePath \"\""
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: W0814 09:48:34.318144    3567 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod805798a2-948f-4d0c-a548-07118d846033/a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea WatchSource:0}: task a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea not found: not found
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.450184    3567 scope.go:110] "RemoveContainer" containerID="86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.451763    3567 scope.go:110] "RemoveContainer" containerID="a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:34.452046    3567 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-vrv5k_kubernetes-dashboard(805798a2-948f-4d0c-a548-07118d846033)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-vrv5k" podUID=805798a2-948f-4d0c-a548-07118d846033
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.457757    3567 scope.go:110] "RemoveContainer" containerID="86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:34.458207    3567 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": not found" containerID="86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.458249    3567 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101} err="failed to get container status \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": rpc error: code = NotFound desc = an error occurred when try to find container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": not found"
	Aug 14 09:48:35 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:35.232259    3567 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=745b232d-997b-47fd-9540-725527a7c8e0 path="/var/lib/kubelet/pods/745b232d-997b-47fd-9540-725527a7c8e0/volumes"
	Aug 14 09:48:35 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:35.454728    3567 scope.go:110] "RemoveContainer" containerID="a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea"
	Aug 14 09:48:35 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:35.454984    3567 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-vrv5k_kubernetes-dashboard(805798a2-948f-4d0c-a548-07118d846033)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-vrv5k" podUID=805798a2-948f-4d0c-a548-07118d846033
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300166    3567 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300210    3567 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300393    3567 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k55k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-rjgmp_kube-system(ca6ddeeb-6afd-4408-8ac8-39df00ec7dea): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300449    3567 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-rjgmp" podUID=ca6ddeeb-6afd-4408-8ac8-39df00ec7dea
	Aug 14 09:48:45 no-preload-20210814094108-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:48:45 no-preload-20210814094108-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:48:45 no-preload-20210814094108-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
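
The last three kubelet lines record systemd stopping the unit at 09:48:45, the pause step of this test, which is why everything after that timestamp fails. The liveness check minikube ran earlier (system_svc.go:44 / ssh_runner.go:149 above) is plain systemctl; the same probe as a sketch:

    // svc.go: the systemctl probe from the log above; after "Stopped
    // kubelet" it reports inactive.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet",
    		"service", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }
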
	
	* 
	* ==> kubernetes-dashboard [ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e] <==
	* 2021/08/14 09:48:32 Using namespace: kubernetes-dashboard
	2021/08/14 09:48:32 Using in-cluster config to connect to apiserver
	2021/08/14 09:48:32 Using secret token for csrf signing
	2021/08/14 09:48:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:48:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:48:32 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/14 09:48:32 Generating JWE encryption key
	2021/08/14 09:48:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:48:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:48:32 Initializing JWE encryption key from synchronized object
	2021/08/14 09:48:32 Creating in-cluster Sidecar client
	2021/08/14 09:48:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:48:32 Serving insecurely on HTTP port: 9090
	2021/08/14 09:48:32 Starting overwatch
	
	* 
	* ==> storage-provisioner [bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 95 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00013d790, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00013d780)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0005a85a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bef00, 0x18e5530, 0xc00058c800, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000591380)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000591380, 0x18b3d60, 0xc000315650, 0x1, 0xc00014a900)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000591380, 0x3b9aca00, 0x0, 0x1, 0xc00014a900)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000591380, 0x3b9aca00, 0xc00014a900)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
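
The storage-provisioner section above is a goroutine dump taken when the process exited (status 255 in the containerd log); goroutine 95 is simply parked in workqueue Get waiting for work, the normal idle state for a controller. A sketch of the consumer pattern it is blocked in: Get waits on a sync.Cond until an item arrives or ShutDown is called, which is exactly the sync.Cond.Wait frame in the dump (the work item here is hypothetical):

    // wq.go: sketch of the k8s.io/client-go/util/workqueue consumer loop.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/util/workqueue"
    )

    func main() {
    	q := workqueue.New()
    	go func() {
    		q.Add("volume-claim") // hypothetical work item
    		q.ShutDown()
    	}()
    	for {
    		item, shutdown := q.Get() // parks in sync.Cond.Wait while empty
    		if shutdown {
    			return
    		}
    		fmt.Println("processing", item)
    		q.Done(item)
    	}
    }
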
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:49:17.719516  235521 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
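
With log collection failed, the harness post-mortem falls back to inspecting the Docker container that backs the node; the full dump follows. Just the .State block can be pulled with docker inspect's template flag, sketched here:

    // state.go: sketch extracting only the container state from the dump below.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("docker", "inspect",
    		"-f", "{{json .State}}", "no-preload-20210814094108-6746").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(string(out))
    }
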
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210814094108-6746
helpers_test.go:236: (dbg) docker inspect no-preload-20210814094108-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3",
	        "Created": "2021-08-14T09:41:10.066772897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:43:11.903170068Z",
	            "FinishedAt": "2021-08-14T09:43:09.639424897Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/hosts",
	        "LogPath": "/var/lib/docker/containers/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3/f79f2f866d42a828f53829ecf686262f290bff0bd277a17f85d67d117ca621c3-json.log",
	        "Name": "/no-preload-20210814094108-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210814094108-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210814094108-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d8bc7d5fb63ec57d96f371d31d30b78c35f4bed300f5c3d09dcb8d8161c86d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210814094108-6746",
	                "Source": "/var/lib/docker/volumes/no-preload-20210814094108-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210814094108-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210814094108-6746",
	                "name.minikube.sigs.k8s.io": "no-preload-20210814094108-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "798b7d8418903154eb1fe4148c8ddb6cb61b065b9fed7dafddccb525405f4682",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32938"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32937"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32934"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32935"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/798b7d841890",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210814094108-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f79f2f866d42"
	                    ],
	                    "NetworkID": "b3ba1e9c1cb05c8c1a4d88161faa9897d77b38de1b24b25543acd0ac824e106d",
	                    "EndpointID": "61b51a9ef6e1441e83a1f1e8d1f9601c5c1b66ee5f74de42a1a60a8bfd02b019",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210814094108-6746 -n no-preload-20210814094108-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210814094108-6746 -n no-preload-20210814094108-6746: exit status 2 (15.756987459s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:49:33.802563  236071 status.go:422] Error apiserver status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210814094108-6746 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210814094108-6746 logs -n 25: exit status 110 (1m0.905778541s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | pause-20210814093545-6746 logs                    | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:02 UTC | Sat, 14 Aug 2021 09:41:03 UTC |
	|         | -n 25                                             |                                     |         |         |                               |                               |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:03 UTC | Sat, 14 Aug 2021 09:41:06 UTC |
	|         | --alsologtostderr -v=5                            |                                     |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:39:02 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                     |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                     |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                     |         |         |                               |                               |
	|         | --keep-context=false                              |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                     |         |         |                               |                               |
	| profile | list --output json                                | minikube                            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:06 UTC | Sat, 14 Aug 2021 09:41:07 UTC |
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:07 UTC | Sat, 14 Aug 2021 09:41:08 UTC |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:16 UTC | Sat, 14 Aug 2021 09:41:17 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:17 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                     |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                     |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                     |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                     |         |         |                               |                               |
	|         | --keep-context=false                              |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                     |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                     |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:18 UTC | Sat, 14 Aug 2021 09:43:19 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:20 UTC | Sat, 14 Aug 2021 09:43:21 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:21 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                     |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                     |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                     |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                     |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                   | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:50 UTC | Sat, 14 Aug 2021 09:44:51 UTC |
	|         | logs -n 25                                        |                                     |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:51 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                     |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210814094325-6746     | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                     |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                     |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:48:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                     |         |         |                               |                               |
	|         | --driver=docker                                   |                                     |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210814094108-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:48:45 UTC | Sat, 14 Aug 2021 09:48:45 UTC |
	|         | no-preload-20210814094108-6746                    |                                     |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                     |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:45:12
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:45:12.676514  219213 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:45:12.676583  219213 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:12.676609  219213 out.go:311] Setting ErrFile to fd 2...
	I0814 09:45:12.676613  219213 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:12.676721  219213 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:45:12.677016  219213 out.go:305] Setting JSON to false
	I0814 09:45:12.712595  219213 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5275,"bootTime":1628929038,"procs":273,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:45:12.712697  219213 start.go:121] virtualization: kvm guest
	I0814 09:45:12.715906  219213 out.go:177] * [embed-certs-20210814094325-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:45:12.717448  219213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:45:12.716056  219213 notify.go:169] Checking for updates...
	I0814 09:45:12.719042  219213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:45:12.720597  219213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:45:12.722183  219213 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:45:12.722638  219213 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:45:12.723036  219213 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:45:12.771441  219213 docker.go:132] docker version: linux-19.03.15
	I0814 09:45:12.771543  219213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:45:12.851100  219213 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:45:12.80695312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:45:12.851195  219213 docker.go:244] overlay module found
	I0814 09:45:12.853255  219213 out.go:177] * Using the docker driver based on existing profile
	I0814 09:45:12.853279  219213 start.go:278] selected driver: docker
	I0814 09:45:12.853284  219213 start.go:751] validating driver "docker" against &{Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:45:12.853368  219213 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:45:12.853401  219213 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:45:12.853419  219213 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:45:12.854792  219213 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:45:12.855582  219213 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:45:12.934182  219213 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:45:12.890723264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0814 09:45:12.934305  219213 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:45:12.934347  219213 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:45:12.936022  219213 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:45:12.936116  219213 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:45:12.936138  219213 cni.go:93] Creating CNI manager for ""
	I0814 09:45:12.936144  219213 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:45:12.936155  219213 start_flags.go:277] config:
	{Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:45:12.938131  219213 out.go:177] * Starting control plane node embed-certs-20210814094325-6746 in cluster embed-certs-20210814094325-6746
	I0814 09:45:12.938178  219213 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:45:12.939564  219213 out.go:177] * Pulling base image ...
	I0814 09:45:12.939606  219213 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:45:12.939663  219213 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:45:12.939695  219213 cache.go:56] Caching tarball of preloaded images
	I0814 09:45:12.939704  219213 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:45:12.939897  219213 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:45:12.939915  219213 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:45:12.940121  219213 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/config.json ...
	I0814 09:45:13.016192  219213 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:45:13.016219  219213 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:45:13.016239  219213 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:45:13.016281  219213 start.go:313] acquiring machines lock for embed-certs-20210814094325-6746: {Name:mk9d63dfbf0330e30e75ccffedf22e0c93e8bd0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:45:13.016378  219213 start.go:317] acquired machines lock for "embed-certs-20210814094325-6746" in 75.307µs
	I0814 09:45:13.016402  219213 start.go:93] Skipping create...Using existing machine configuration
	I0814 09:45:13.016410  219213 fix.go:55] fixHost starting: 
	I0814 09:45:13.016680  219213 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:45:13.054977  219213 fix.go:108] recreateIfNeeded on embed-certs-20210814094325-6746: state=Stopped err=<nil>
	W0814 09:45:13.055025  219213 fix.go:134] unexpected machine state, will restart: <nil>
	I0814 09:45:10.255829  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:12.256243  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:14.755147  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:13.057291  219213 out.go:177] * Restarting existing docker container for "embed-certs-20210814094325-6746" ...
	I0814 09:45:13.057358  219213 cli_runner.go:115] Run: docker start embed-certs-20210814094325-6746
	I0814 09:45:14.423199  219213 cli_runner.go:168] Completed: docker start embed-certs-20210814094325-6746: (1.365811138s)
	I0814 09:45:14.423277  219213 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:45:14.464030  219213 kic.go:420] container "embed-certs-20210814094325-6746" state is running.
	I0814 09:45:14.464412  219213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:45:14.503527  219213 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/config.json ...
	I0814 09:45:14.503726  219213 machine.go:88] provisioning docker machine ...
	I0814 09:45:14.503761  219213 ubuntu.go:169] provisioning hostname "embed-certs-20210814094325-6746"
	I0814 09:45:14.503808  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:14.540944  219213 main.go:130] libmachine: Using SSH client type: native
	I0814 09:45:14.541187  219213 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0814 09:45:14.541212  219213 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210814094325-6746 && echo "embed-certs-20210814094325-6746" | sudo tee /etc/hostname
	I0814 09:45:14.541692  219213 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34934->127.0.0.1:32948: read: connection reset by peer
	I0814 09:45:16.755965  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:18.756563  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:17.691853  219213 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210814094325-6746
	
	I0814 09:45:17.691924  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:17.730138  219213 main.go:130] libmachine: Using SSH client type: native
	I0814 09:45:17.730291  219213 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0814 09:45:17.730312  219213 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210814094325-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210814094325-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210814094325-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:45:17.851952  219213 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:45:17.851978  219213 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:45:17.851998  219213 ubuntu.go:177] setting up certificates
	I0814 09:45:17.852008  219213 provision.go:83] configureAuth start
	I0814 09:45:17.852050  219213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:45:17.892638  219213 provision.go:138] copyHostCerts
	I0814 09:45:17.892706  219213 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:45:17.892717  219213 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:45:17.892771  219213 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:45:17.892905  219213 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:45:17.892918  219213 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:45:17.892941  219213 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:45:17.893001  219213 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:45:17.893008  219213 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:45:17.893025  219213 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:45:17.893076  219213 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210814094325-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210814094325-6746]
	I0814 09:45:18.127966  219213 provision.go:172] copyRemoteCerts
	I0814 09:45:18.128031  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:45:18.128071  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.166720  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.256528  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:45:18.272768  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0814 09:45:18.287782  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:45:18.302468  219213 provision.go:86] duration metric: configureAuth took 450.451654ms
	I0814 09:45:18.302485  219213 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:45:18.302626  219213 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:45:18.302636  219213 machine.go:91] provisioned docker machine in 3.79889634s
	I0814 09:45:18.302643  219213 start.go:267] post-start starting for "embed-certs-20210814094325-6746" (driver="docker")
	I0814 09:45:18.302648  219213 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:45:18.302680  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:45:18.302723  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.341490  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.431525  219213 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:45:18.433991  219213 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:45:18.434015  219213 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:45:18.434023  219213 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:45:18.434028  219213 info.go:137] Remote host: Ubuntu 20.04.2 LTS
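	The three "Couldn't set key" warnings above are benign: libmachine parses /etc/os-release into a fixed struct, and any key without a matching field (PRIVACY_POLICY_URL, VERSION_CODENAME, UBUNTU_CODENAME) is reported and skipped. A rough sketch of that kind of parse, with an assumed field list rather than libmachine's actual one:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Keys the hypothetical struct knows about; everything else warns.
	known := map[string]bool{"NAME": true, "VERSION": true, "ID": true, "VERSION_ID": true, "PRETTY_NAME": true}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), "=", 2)
		if len(parts) != 2 {
			continue
		}
		if !known[parts[0]] {
			fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", parts[0])
			continue
		}
		fmt.Printf("%s = %s\n", parts[0], strings.Trim(parts[1], `"`))
	}
}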
	I0814 09:45:18.434036  219213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:45:18.434095  219213 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:45:18.434172  219213 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:45:18.434268  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:45:18.440281  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:45:18.455342  219213 start.go:270] post-start completed in 152.690241ms
	I0814 09:45:18.455386  219213 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:45:18.455419  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.493843  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.580556  219213 fix.go:57] fixHost completed within 5.564142879s
	I0814 09:45:18.580580  219213 start.go:80] releasing machines lock for "embed-certs-20210814094325-6746", held for 5.564189475s
	I0814 09:45:18.580650  219213 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210814094325-6746
	I0814 09:45:18.618680  219213 ssh_runner.go:149] Run: systemctl --version
	I0814 09:45:18.618720  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.618756  219213 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:45:18.618814  219213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:45:18.660557  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.660558  219213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:45:18.766126  219213 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:45:18.776888  219213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:45:18.785271  219213 docker.go:153] disabling docker service ...
	I0814 09:45:18.785321  219213 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:45:18.793782  219213 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:45:18.801548  219213 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:45:18.860901  219213 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:45:18.914390  219213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
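	In the two Run lines that follow, "%!s(MISSING)" is not part of the command that actually executed: it is Go's fmt marker for a format verb with no operand, produced because the logged command string itself contains a literal %s meant for the remote shell's printf. A two-line reproduction:

package main

import "fmt"

func main() {
	// The command template carries a shell-level %s; formatting it through
	// fmt with no operand makes Go print its missing-argument marker
	// (go vet would flag this, which is exactly the effect seen in the log).
	fmt.Println(fmt.Sprintf("sudo mkdir -p /etc && printf %s"))
	// Output: sudo mkdir -p /etc && printf %!s(MISSING)
}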
	I0814 09:45:18.922515  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:45:18.933706  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
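	The opaque payload piped through base64 -d above is the containerd config.toml that minikube installs. Decoding the start of the payload gives (excerpt):

root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0
[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

	The remainder configures the [debug], [metrics], [cgroup], and [plugins] sections: CRI settings (sandbox_image k8s.gcr.io/pause:3.4.1), the io.containerd.runc.v2 runtime, CNI paths (/opt/cni/bin, /etc/cni/net.mk), and a docker.io mirror pointing at registry-1.docker.io.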
	I0814 09:45:18.945694  219213 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:45:18.951209  219213 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:45:18.951261  219213 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:45:18.957754  219213 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:45:18.963238  219213 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:45:19.017015  219213 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:45:19.086303  219213 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:45:19.086363  219213 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:45:19.089913  219213 start.go:413] Will wait 60s for crictl version
	I0814 09:45:19.089968  219213 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:45:19.111768  219213 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:45:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
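	containerd accepts its socket before the CRI service has finished initializing, so the first "sudo crictl version" fails with "server is not initialized yet" and retry.go schedules another attempt (which succeeds at 09:45:30 below). The wait amounts to probe, sleep, repeat; a minimal sketch with an assumed fixed backoff, where minikube's retry.go instead computes and logs its own delay:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryUntil re-runs probe until it succeeds or the deadline elapses,
// returning the last error on timeout.
func retryUntil(deadline time.Duration, probe func() error) error {
	var err error
	for start := time.Now(); time.Since(start) < deadline; {
		if err = probe(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed backoff for the sketch
	}
	return err
}

func main() {
	err := retryUntil(60*time.Second, func() error {
		return exec.Command("sudo", "crictl", "version").Run()
	})
	fmt.Println("crictl ready:", err == nil)
}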
	I0814 09:45:21.255518  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:23.755237  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:25.755360  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:28.255803  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:30.158828  219213 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:45:30.214702  219213 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:45:30.214764  219213 ssh_runner.go:149] Run: containerd --version
	I0814 09:45:30.237176  219213 ssh_runner.go:149] Run: containerd --version
	I0814 09:45:30.263100  219213 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:45:30.263185  219213 cli_runner.go:115] Run: docker network inspect embed-certs-20210814094325-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:45:30.300976  219213 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:45:30.304083  219213 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:45:30.312845  219213 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:45:30.312901  219213 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:45:30.334193  219213 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:45:30.334210  219213 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:45:30.334241  219213 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:45:30.354604  219213 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:45:30.354620  219213 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:45:30.354663  219213 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:45:30.374680  219213 cni.go:93] Creating CNI manager for ""
	I0814 09:45:30.374706  219213 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:45:30.374715  219213 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:45:30.374730  219213 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210814094325-6746 NodeName:embed-certs-20210814094325-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:45:30.374834  219213 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210814094325-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:45:30.374912  219213 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210814094325-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0814 09:45:30.374963  219213 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:45:30.381148  219213 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:45:30.381205  219213 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:45:30.387016  219213 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0814 09:45:30.398193  219213 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:45:30.409326  219213 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I0814 09:45:30.420343  219213 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:45:30.422902  219213 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:45:30.430717  219213 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746 for IP: 192.168.58.2
	I0814 09:45:30.430760  219213 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:45:30.430776  219213 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:45:30.430824  219213 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/client.key
	I0814 09:45:30.430848  219213 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key.cee25041
	I0814 09:45:30.430866  219213 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key
	I0814 09:45:30.430981  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:45:30.431030  219213 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:45:30.431046  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:45:30.431076  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:45:30.431109  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:45:30.431139  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:45:30.431203  219213 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:45:30.432504  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:45:30.447364  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:45:30.462451  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:45:30.477633  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/embed-certs-20210814094325-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:45:30.492751  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:45:30.507832  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:45:30.522671  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:45:30.537713  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:45:30.552860  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:45:30.568110  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:45:30.582869  219213 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:45:30.597583  219213 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:45:30.608341  219213 ssh_runner.go:149] Run: openssl version
	I0814 09:45:30.612731  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:45:30.619316  219213 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:45:30.622063  219213 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:45:30.622099  219213 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:45:30.626527  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:45:30.632449  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:45:30.638987  219213 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:45:30.641736  219213 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:45:30.641768  219213 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:45:30.645950  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:45:30.651801  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:45:30.658203  219213 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:45:30.660858  219213 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:45:30.660898  219213 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:45:30.665211  219213 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
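	The repeated ls / "openssl x509 -hash" / "ln -fs" sequence above installs each PEM under OpenSSL's CApath convention: the certificate is linked at /etc/ssl/certs/<subject-hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 here), which is the name verification uses to look a CA up by hash. In outline, with a hypothetical helper name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA links pemPath at /etc/ssl/certs/<subject-hash>.0 so that
// OpenSSL's CApath lookup can find it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("sudo", "ln", "-fs", pemPath, fmt.Sprintf("/etc/ssl/certs/%s.0", hash)).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}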
	I0814 09:45:30.671014  219213 kubeadm.go:390] StartCluster: {Name:embed-certs-20210814094325-6746 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210814094325-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:45:30.671088  219213 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:45:30.671126  219213 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:45:30.692470  219213 cri.go:76] found id: "55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5"
	I0814 09:45:30.692491  219213 cri.go:76] found id: "e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495"
	I0814 09:45:30.692497  219213 cri.go:76] found id: "3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9"
	I0814 09:45:30.692503  219213 cri.go:76] found id: "1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138"
	I0814 09:45:30.692510  219213 cri.go:76] found id: "54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae"
	I0814 09:45:30.692516  219213 cri.go:76] found id: "7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147"
	I0814 09:45:30.692521  219213 cri.go:76] found id: "9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38"
	I0814 09:45:30.692532  219213 cri.go:76] found id: "68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3"
	I0814 09:45:30.692537  219213 cri.go:76] found id: ""
	I0814 09:45:30.692568  219213 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:45:30.704880  219213 cri.go:103] JSON = null
	W0814 09:45:30.704915  219213 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0814 09:45:30.704947  219213 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:45:30.710859  219213 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0814 09:45:30.710880  219213 kubeadm.go:600] restartCluster start
	I0814 09:45:30.710914  219213 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0814 09:45:30.716486  219213 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:30.717211  219213 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210814094325-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:45:30.717396  219213 kubeconfig.go:128] "embed-certs-20210814094325-6746" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig - will repair!
	I0814 09:45:30.717822  219213 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:45:30.720063  219213 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:45:30.725765  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:30.725822  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:30.737276  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:30.937645  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:30.937706  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:30.951639  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.137902  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.137967  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.151090  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.338360  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.338448  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.352317  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.537501  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.537575  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.551094  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.738330  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.738396  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.751356  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:31.937610  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:31.937675  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:31.951045  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.138314  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.138381  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.153876  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.338137  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.338200  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.351552  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.537809  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.537865  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.550664  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:30.256668  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:32.763916  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:32.738135  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.738215  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.751149  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:32.938390  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:32.938455  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:32.951950  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.138244  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.138334  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.151217  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.337437  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.337501  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.350794  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.537891  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.537956  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.551203  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.737426  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.737490  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.750428  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.750448  219213 api_server.go:164] Checking apiserver status ...
	I0814 09:45:33.750486  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:45:33.763161  219213 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.763181  219213 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0814 09:45:33.763188  219213 kubeadm.go:1032] stopping kube-system containers ...
	I0814 09:45:33.763199  219213 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:45:33.763244  219213 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:45:33.822662  219213 cri.go:76] found id: "55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5"
	I0814 09:45:33.822682  219213 cri.go:76] found id: "e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495"
	I0814 09:45:33.822688  219213 cri.go:76] found id: "3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9"
	I0814 09:45:33.822692  219213 cri.go:76] found id: "1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138"
	I0814 09:45:33.822696  219213 cri.go:76] found id: "54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae"
	I0814 09:45:33.822700  219213 cri.go:76] found id: "7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147"
	I0814 09:45:33.822704  219213 cri.go:76] found id: "9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38"
	I0814 09:45:33.822708  219213 cri.go:76] found id: "68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3"
	I0814 09:45:33.822713  219213 cri.go:76] found id: ""
	I0814 09:45:33.822718  219213 cri.go:221] Stopping containers: [55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5 e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495 3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9 1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138 54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae 7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147 9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38 68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3]
	I0814 09:45:33.822767  219213 ssh_runner.go:149] Run: which crictl
	I0814 09:45:33.825398  219213 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 55e329998ae50505ebcb19a2c269a3d52f29c3a4b92650c453f744d8e78676e5 e64c8214bad9961547fdc2c119ee6d3eb2e75c3d82eb02d262c63f4dd85eb495 3ce66361610934db9cf36944cfd0e8f53dbc266b43e42ef57910148733295bf9 1df996662fcb6f8a0ba47d41dba78e874ab812cefe956db37780beb417bd8138 54d8fd8493e3afe489bc4db877f543ff95ff1990bf178292bff6939dee011cae 7239f0ed5afe463a385c419878ce3b7e90a59f2c5406c72a07c12c7e31296147 9ffce1306e10c245a2ebf3b58eaf890cad715c90033568b4ee42728214971b38 68fd3b9c805a00a862bd87de0ea0e9f44d55c5f3514dde8c82525124dcc93fa3
	I0814 09:45:33.847462  219213 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0814 09:45:33.856441  219213 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:45:33.862510  219213 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 14 09:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 14 09:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug 14 09:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 14 09:43 /etc/kubernetes/scheduler.conf
	
	I0814 09:45:33.862560  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 09:45:33.868678  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 09:45:33.874650  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 09:45:33.880408  219213 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.880451  219213 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:45:33.885968  219213 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 09:45:33.891591  219213 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:45:33.891625  219213 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:45:33.897228  219213 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:45:33.903147  219213 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0814 09:45:33.903164  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:33.957489  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:34.641477  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:34.769485  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:34.840519  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:34.907080  219213 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:45:34.907146  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:35.420191  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:35.920312  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:36.420275  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:36.920198  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:37.420399  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:35.255536  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:37.256069  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:39.755325  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:37.920245  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:38.419906  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:38.920456  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:39.419901  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:39.919881  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:40.419605  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:40.920452  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:41.420411  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:45:41.503885  219213 api_server.go:70] duration metric: took 6.596804894s to wait for apiserver process to appear ...
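	The burst of pgrep runs above is the apiserver process wait: once the kubeadm init phases have been replayed, minikube polls "pgrep -xnf kube-apiserver.*minikube.*" on a roughly 500ms cadence until a PID appears, 6.6s in this run. A sketch of the poll under the same assumed cadence and a one-minute cap:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	for {
		// pgrep exits 0 only once a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Printf("apiserver process appeared after %s\n", time.Since(start))
			return
		}
		if time.Since(start) > time.Minute {
			panic("timed out waiting for the kube-apiserver process")
		}
		time.Sleep(500 * time.Millisecond)
	}
}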
	I0814 09:45:41.503915  219213 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:45:41.503927  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:41.504322  219213 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0814 09:45:42.005044  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:41.755488  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:43.756277  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:45.704410  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:45:45.704447  219213 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:45:46.004588  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:46.008955  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:45:46.008981  219213 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:45:46.504491  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:46.510082  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:45:46.510106  219213 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[same 23-line healthz body as the 09:45:46.510082 response above]
	I0814 09:45:47.004597  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:45:47.009929  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:45:47.017789  219213 api_server.go:139] control plane version: v1.21.3
	I0814 09:45:47.017813  219213 api_server.go:129] duration metric: took 5.513890902s to wait for apiserver health ...
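The 5.5s health wait above is just a poll of the apiserver's /healthz endpoint until it stops returning 500. A minimal Go sketch of such a loop (illustrative only, not minikube's api_server.go; certificate verification is skipped here on the assumption that the apiserver serves a self-signed cert, as minikube's does):

    // healthz.go: poll an apiserver /healthz endpoint until it returns 200 OK,
    // the same loop shape as the checks logged above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// Skip cert verification for this unauthenticated health probe only.
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz finally answered "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return fmt.Errorf("%s still unhealthy after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.58.2:8443/healthz", time.Minute))
    }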
	I0814 09:45:47.017825  219213 cni.go:93] Creating CNI manager for ""
	I0814 09:45:47.017833  219213 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:45:47.020350  219213 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:45:47.020399  219213 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:45:47.024074  219213 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:45:47.024091  219213 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:45:47.036846  219213 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:45:47.527366  219213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:45:47.537910  219213 system_pods.go:59] 9 kube-system pods found
	I0814 09:45:47.537945  219213 system_pods.go:61] "coredns-558bd4d5db-r9f9m" [a95b5bd5-9099-4c69-a77e-d319f3db017f] Running
	I0814 09:45:47.537954  219213 system_pods.go:61] "etcd-embed-certs-20210814094325-6746" [8a290f3e-9865-416a-a8b5-8185ce927699] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 09:45:47.537959  219213 system_pods.go:61] "kindnet-mmp5r" [77fdb837-eeb8-412b-a20e-ce5d6d198691] Running
	I0814 09:45:47.537963  219213 system_pods.go:61] "kube-apiserver-embed-certs-20210814094325-6746" [662d7fb3-b141-4d8e-a122-f58805e6b74a] Running
	I0814 09:45:47.537967  219213 system_pods.go:61] "kube-controller-manager-embed-certs-20210814094325-6746" [d5e10fb1-ef80-41f7-a5c6-8fb7ea20d7d4] Running
	I0814 09:45:47.537971  219213 system_pods.go:61] "kube-proxy-mgvn2" [2d2198aa-7650-47ff-81cc-7b3a13d11ac6] Running
	I0814 09:45:47.537976  219213 system_pods.go:61] "kube-scheduler-embed-certs-20210814094325-6746" [3a19f542-979a-4726-b47d-bebbfa29cfac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:45:47.537983  219213 system_pods.go:61] "metrics-server-7c784ccb57-57jxt" [69805726-3356-4374-ba02-ddee9ab9f4d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:45:47.538015  219213 system_pods.go:61] "storage-provisioner" [38121728-bc1a-4972-b44b-f156a068aea0] Running
	I0814 09:45:47.538020  219213 system_pods.go:74] duration metric: took 10.633349ms to wait for pod list to return data ...
	I0814 09:45:47.538026  219213 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:45:47.541194  219213 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:45:47.541218  219213 node_conditions.go:123] node cpu capacity is 8
	I0814 09:45:47.541229  219213 node_conditions.go:105] duration metric: took 3.196426ms to run NodePressure ...
	I0814 09:45:47.541243  219213 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:45:46.255616  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:48.256248  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:48.104081  219213 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0814 09:45:48.108233  219213 kubeadm.go:746] kubelet initialised
	I0814 09:45:48.108257  219213 kubeadm.go:747] duration metric: took 4.148526ms waiting for restarted kubelet to initialise ...
	I0814 09:45:48.108265  219213 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
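The pod_ready waits that follow poll each pod's PodReady condition through the API until it turns True or the 4m0s budget runs out. A rough client-go sketch of that pattern (hypothetical helper names, not minikube's pod_ready.go):

    // podready.go: poll a pod until its PodReady condition reports True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // pod may not exist yet; keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = waitPodReady(cs, "kube-system", "coredns-558bd4d5db-r9f9m", 4*time.Minute)
    	fmt.Println("wait result:", err)
    }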
	I0814 09:45:48.112670  219213 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:48.123242  219213 pod_ready.go:92] pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:48.123261  219213 pod_ready.go:81] duration metric: took 10.570463ms waiting for pod "coredns-558bd4d5db-r9f9m" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:48.123269  219213 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:50.133040  219213 pod_ready.go:102] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:52.631806  219213 pod_ready.go:102] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:50.755599  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:52.755700  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:53.131092  219213 pod_ready.go:92] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:53.131121  219213 pod_ready.go:81] duration metric: took 5.00784485s waiting for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:53.131136  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:55.141124  219213 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:55.256189  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:57.755425  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:59.758287  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:57.640595  219213 pod_ready.go:102] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:45:59.140433  219213 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:59.140459  219213 pod_ready.go:81] duration metric: took 6.00931561s waiting for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.140470  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.144444  219213 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:59.144461  219213 pod_ready.go:81] duration metric: took 3.983627ms waiting for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.144472  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mgvn2" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.148266  219213 pod_ready.go:92] pod "kube-proxy-mgvn2" in "kube-system" namespace has status "Ready":"True"
	I0814 09:45:59.148282  219213 pod_ready.go:81] duration metric: took 3.80255ms waiting for pod "kube-proxy-mgvn2" in "kube-system" namespace to be "Ready" ...
	I0814 09:45:59.148292  219213 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:46:00.656204  219213 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:46:00.656230  219213 pod_ready.go:81] duration metric: took 1.507930198s waiting for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:46:00.656240  219213 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace to be "Ready" ...
	I0814 09:46:02.255586  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:46:02.667460  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	[... 94 similar poll lines elided: both watchers re-check roughly every 2.5s and the two metrics-server pods never leave "Ready":"False" ...]
	I0814 09:47:54.256013  198227 pod_ready.go:102] pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:54.751367  198227 pod_ready.go:81] duration metric: took 4m0.004529083s waiting for pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace to be "Ready" ...
	E0814 09:47:54.751388  198227 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-lsbk6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 09:47:54.751409  198227 pod_ready.go:38] duration metric: took 4m10.618186425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:47:54.751435  198227 kubeadm.go:604] restartCluster took 4m26.295245598s
	W0814 09:47:54.751568  198227 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0814 09:47:54.751599  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0814 09:47:53.165474  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:55.165815  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:47:57.948472  198227 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.196842047s)
	I0814 09:47:57.948535  198227 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0814 09:47:57.958148  198227 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:47:57.958208  198227 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:47:57.980246  198227 cri.go:76] found id: ""
	I0814 09:47:57.980305  198227 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:47:57.986736  198227 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:47:57.986783  198227 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:47:57.992761  198227 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:47:57.992809  198227 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:47:58.246319  198227 out.go:204]   - Generating certificates and keys ...
	I0814 09:47:58.869652  198227 out.go:204]   - Booting up control plane ...
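This is the fallback path: once the restart wait timed out, the old control plane was wiped with kubeadm reset (09:47:54 above) and kubeadm init re-ran against the generated config, ignoring preflight checks that are meaningless inside a docker "node" container. A sketch of driving that pair of commands from Go (illustrative; minikube actually runs them over SSH via ssh_runner, and only a subset of the log's --ignore-preflight-errors list is shown here):

    // reinit.go: the reset-then-init recovery shown above, driven locally
    // (run on the node itself with kubeadm on PATH).
    package main

    import (
    	"os"
    	"os/exec"
    )

    func run(args ...string) error {
    	cmd := exec.Command(args[0], args[1:]...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// 1. Tear down the failed control plane (cri-socket as in the log).
    	if err := run("sudo", "kubeadm", "reset",
    		"--cri-socket", "/run/containerd/containerd.sock", "--force"); err != nil {
    		panic(err)
    	}
    	// 2. Re-initialise from the generated config; Swap/Mem/SystemVerification
    	// preflight checks are skipped because the "node" is a docker container.
    	if err := run("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=Swap,Mem,SystemVerification"); err != nil {
    		panic(err)
    	}
    }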
	I0814 09:47:57.667421  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:00.165338  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:02.665332  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:05.165237  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:11.920868  198227 out.go:204]   - Configuring RBAC rules ...
	I0814 09:48:12.333322  198227 cni.go:93] Creating CNI manager for ""
	I0814 09:48:12.333346  198227 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:48:07.665749  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:10.165312  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:12.334944  198227 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:48:12.335010  198227 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:48:12.338427  198227 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0814 09:48:12.338447  198227 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:48:12.350499  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:48:12.492357  198227 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:48:12.492429  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:12.492429  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=no-preload-20210814094108-6746 minikube.k8s.io/updated_at=2021_08_14T09_48_12_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:48:12.507107  198227 ops.go:34] apiserver oom_adj: -16
	I0814 09:48:12.552786  198227 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 5 identical "get sa default" probes elided (retried every ~0.5s) ...]
	I0814 09:48:12.665519  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:15.164821  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:17.166787  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	[... 10 identical "get sa default" probes elided ...]
	I0814 09:48:19.665114  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:22.164949  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	[... 8 identical "get sa default" probes elided ...]
	I0814 09:48:24.185162  198227 kubeadm.go:985] duration metric: took 11.692791811s to wait for elevateKubeSystemPrivileges.
	I0814 09:48:24.185192  198227 kubeadm.go:392] StartCluster complete in 4m55.788730202s
	I0814 09:48:24.185214  198227 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:48:24.185304  198227 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:48:24.186142  198227 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:48:24.701063  198227 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210814094108-6746" rescaled to 1
	I0814 09:48:24.701114  198227 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0814 09:48:24.702623  198227 out.go:177] * Verifying Kubernetes components...
	I0814 09:48:24.702671  198227 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:48:24.701153  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:48:24.701174  198227 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0814 09:48:24.702782  198227 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702795  198227 addons.go:59] Setting metrics-server=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702805  198227 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210814094108-6746"
	I0814 09:48:24.702804  198227 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702826  198227 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210814094108-6746"
	I0814 09:48:24.702785  198227 addons.go:59] Setting dashboard=true in profile "no-preload-20210814094108-6746"
	I0814 09:48:24.702863  198227 addons.go:135] Setting addon dashboard=true in "no-preload-20210814094108-6746"
	W0814 09:48:24.702878  198227 addons.go:147] addon dashboard should already be in state true
	I0814 09:48:24.702806  198227 addons.go:135] Setting addon metrics-server=true in "no-preload-20210814094108-6746"
	I0814 09:48:24.702910  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	W0814 09:48:24.702918  198227 addons.go:147] addon metrics-server should already be in state true
	I0814 09:48:24.702951  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:24.701371  198227 config.go:177] Loaded profile config "no-preload-20210814094108-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	W0814 09:48:24.702812  198227 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:48:24.703062  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:24.703172  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.703404  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.703453  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.703512  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.714164  198227 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210814094108-6746" to be "Ready" ...
	I0814 09:48:24.716834  198227 node_ready.go:49] node "no-preload-20210814094108-6746" has status "Ready":"True"
	I0814 09:48:24.716855  198227 node_ready.go:38] duration metric: took 2.654773ms waiting for node "no-preload-20210814094108-6746" to be "Ready" ...
	I0814 09:48:24.716870  198227 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:48:24.721408  198227 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:24.726246  198227 pod_ready.go:92] pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:24.726265  198227 pod_ready.go:81] duration metric: took 4.82012ms waiting for pod "etcd-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:24.726277  198227 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:24.766130  198227 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0814 09:48:24.767653  198227 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0814 09:48:24.767719  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0814 09:48:24.767732  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0814 09:48:24.767787  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.771481  198227 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210814094108-6746"
	W0814 09:48:24.771507  198227 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:48:24.771534  198227 host.go:66] Checking if "no-preload-20210814094108-6746" exists ...
	I0814 09:48:24.772197  198227 cli_runner.go:115] Run: docker container inspect no-preload-20210814094108-6746 --format={{.State.Status}}
	I0814 09:48:24.774269  198227 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0814 09:48:24.774359  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 09:48:24.774376  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0814 09:48:24.774439  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.779524  198227 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:48:24.779669  198227 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:48:24.779682  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:48:24.779744  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.805000  198227 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
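The pipeline above splices a hosts stanza into the CoreDNS Corefile so cluster DNS resolves host.minikube.internal to the host-side gateway (192.168.49.1). A small Go sketch of the same transformation (illustrative; the real change is done by the sed expression in the command):

    // corednshost.go: insert a hosts{} stanza before the forward line of a
    // Corefile, mirroring the sed pipeline in the log above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, ip, host string) string {
    	stanza := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		// Anchor on CoreDNS's forward plugin, exactly where sed inserts.
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(stanza)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1", "host.minikube.internal"))
    }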
	I0814 09:48:24.828072  198227 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:48:24.828096  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:48:24.828161  198227 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210814094108-6746
	I0814 09:48:24.832721  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:48:24.847396  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:48:24.859514  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
	I0814 09:48:24.876349  198227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/no-preload-20210814094108-6746/id_rsa Username:docker}
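Each of those ssh clients connects to 127.0.0.1:32938, the host port docker published for the node container's 22/tcp; the inspect commands above recover it with a Go template. A sketch of the same lookup (illustrative helper; like the real thing, it fails when the container is not running):

    // sshport.go: recover the host port mapped to a node container's 22/tcp,
    // using the same Go template the cli_runner lines above show.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshPort("no-preload-20210814094108-6746")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh port:", port)
    }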
	I0814 09:48:25.114534  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:48:25.114585  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:48:25.115333  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0814 09:48:25.115352  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0814 09:48:25.129656  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0814 09:48:25.129681  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0814 09:48:25.202169  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0814 09:48:25.202191  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0814 09:48:25.215767  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 09:48:25.215790  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0814 09:48:25.218133  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0814 09:48:25.218155  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0814 09:48:25.232121  198227 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0814 09:48:25.312880  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 09:48:25.312905  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0814 09:48:25.321275  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0814 09:48:25.321299  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0814 09:48:25.330492  198227 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:48:25.330518  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0814 09:48:25.336826  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0814 09:48:25.336849  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0814 09:48:25.408475  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:48:25.414709  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0814 09:48:25.414733  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0814 09:48:25.435482  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0814 09:48:25.435509  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0814 09:48:25.504284  198227 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:48:25.504309  198227 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0814 09:48:25.516637  198227 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:48:26.201697  198227 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210814094108-6746"
	I0814 09:48:26.617400  198227 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.100703883s)
	I0814 09:48:24.664544  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:26.665266  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:26.619426  198227 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0814 09:48:26.619465  198227 addons.go:344] enableAddons completed in 1.91829394s
	I0814 09:48:26.807531  198227 pod_ready.go:102] pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:29.236143  198227 pod_ready.go:102] pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:30.236508  198227 pod_ready.go:92] pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:30.236535  198227 pod_ready.go:81] duration metric: took 5.510248695s waiting for pod "kube-apiserver-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.236549  198227 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.240532  198227 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:30.240551  198227 pod_ready.go:81] duration metric: took 3.988507ms waiting for pod "kube-controller-manager-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.240564  198227 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.243995  198227 pod_ready.go:92] pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:48:30.244008  198227 pod_ready.go:81] duration metric: took 3.436745ms waiting for pod "kube-scheduler-no-preload-20210814094108-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:48:30.244015  198227 pod_ready.go:38] duration metric: took 5.527130406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:48:30.244030  198227 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:48:30.244064  198227 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:48:30.266473  198227 api_server.go:70] duration metric: took 5.565330794s to wait for apiserver process to appear ...
	I0814 09:48:30.266497  198227 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:48:30.266508  198227 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 09:48:30.270876  198227 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0814 09:48:30.271613  198227 api_server.go:139] control plane version: v1.22.0-rc.0
	I0814 09:48:30.271629  198227 api_server.go:129] duration metric: took 5.127102ms to wait for apiserver health ...
	I0814 09:48:30.271637  198227 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:48:30.276088  198227 system_pods.go:59] 10 kube-system pods found
	I0814 09:48:30.276114  198227 system_pods.go:61] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.276121  198227 system_pods.go:61] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.276126  198227 system_pods.go:61] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.276132  198227 system_pods.go:61] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0814 09:48:30.276139  198227 system_pods.go:61] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.276145  198227 system_pods.go:61] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.276152  198227 system_pods.go:61] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.276160  198227 system_pods.go:61] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.276165  198227 system_pods.go:61] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.276173  198227 system_pods.go:61] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.276182  198227 system_pods.go:74] duration metric: took 4.540474ms to wait for pod list to return data ...
	I0814 09:48:30.276192  198227 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:48:30.278320  198227 default_sa.go:45] found service account: "default"
	I0814 09:48:30.278337  198227 default_sa.go:55] duration metric: took 2.139851ms for default service account to be created ...
	I0814 09:48:30.278343  198227 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:48:30.283291  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:30.283316  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.283323  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.283331  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.283337  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0814 09:48:30.283355  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.283360  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.283366  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.283373  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.283380  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.283388  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.283406  198227 retry.go:31] will retry after 305.063636ms: missing components: kube-dns, kube-proxy
	I0814 09:48:30.607116  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:30.607150  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.607159  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.607167  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.607181  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0814 09:48:30.607192  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.607207  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.607220  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.607231  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.607245  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.607257  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.607278  198227 retry.go:31] will retry after 338.212508ms: missing components: kube-dns, kube-proxy
	I0814 09:48:30.951806  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:30.951844  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.951855  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:30.951861  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:30.951867  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Running
	I0814 09:48:30.951871  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:30.951877  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:30.951883  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 09:48:30.951892  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:30.951898  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:30.951905  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:30.951920  198227 retry.go:31] will retry after 378.459802ms: missing components: kube-dns, kube-proxy
	I0814 09:48:31.336914  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:31.336951  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.336961  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.336970  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:31.336978  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Running
	I0814 09:48:31.336987  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:31.336998  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:31.337008  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Running
	I0814 09:48:31.337018  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:31.337029  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:31.337038  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:31.337063  198227 retry.go:31] will retry after 469.882201ms: missing components: kube-dns
	I0814 09:48:31.812707  198227 system_pods.go:86] 10 kube-system pods found
	I0814 09:48:31.812738  198227 system_pods.go:89] "coredns-78fcd69978-29ft7" [c53fbcf8-32d6-42e1-82e3-5b7be35a6ad4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.812748  198227 system_pods.go:89] "coredns-78fcd69978-jl7mh" [745b232d-997b-47fd-9540-725527a7c8e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 09:48:31.812753  198227 system_pods.go:89] "etcd-no-preload-20210814094108-6746" [a10d12ce-4a86-4eac-9b68-31d31426f38b] Running
	I0814 09:48:31.812758  198227 system_pods.go:89] "kindnet-vtqtr" [61de2c32-adcf-43c9-9f57-84213c6a9ff2] Running
	I0814 09:48:31.812762  198227 system_pods.go:89] "kube-apiserver-no-preload-20210814094108-6746" [74ec46d6-3bc0-439a-b117-dbe91cff818e] Running
	I0814 09:48:31.812769  198227 system_pods.go:89] "kube-controller-manager-no-preload-20210814094108-6746" [4d71174f-e42f-4e6e-bf80-0f79a71141b2] Running
	I0814 09:48:31.812777  198227 system_pods.go:89] "kube-proxy-wjwsl" [101b3998-93d5-4c75-b83c-09c983f2f62a] Running
	I0814 09:48:31.812784  198227 system_pods.go:89] "kube-scheduler-no-preload-20210814094108-6746" [3dae14b9-8cc7-446c-bfc2-0cad2bed677f] Running
	I0814 09:48:31.812819  198227 system_pods.go:89] "metrics-server-7c784ccb57-rjgmp" [ca6ddeeb-6afd-4408-8ac8-39df00ec7dea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:48:31.812832  198227 system_pods.go:89] "storage-provisioner" [58508b3f-6c10-488b-b616-44a1cb8dfed8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 09:48:31.812840  198227 system_pods.go:126] duration metric: took 1.534492166s to wait for k8s-apps to be running ...
	I0814 09:48:31.812850  198227 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:48:31.812891  198227 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:48:31.822119  198227 system_svc.go:56] duration metric: took 9.262136ms WaitForService to wait for kubelet.
	I0814 09:48:31.822141  198227 kubeadm.go:547] duration metric: took 7.121002492s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:48:31.822166  198227 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:48:31.824651  198227 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:48:31.824679  198227 node_conditions.go:123] node cpu capacity is 8
	I0814 09:48:31.824696  198227 node_conditions.go:105] duration metric: took 2.52389ms to run NodePressure ...
	I0814 09:48:31.824710  198227 start.go:231] waiting for startup goroutines ...
	I0814 09:48:31.874398  198227 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0814 09:48:31.876033  198227 out.go:177] 
	W0814 09:48:31.876182  198227 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.0-rc.0.
	I0814 09:48:31.877533  198227 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0814 09:48:31.879774  198227 out.go:177] * Done! kubectl is now configured to use "no-preload-20210814094108-6746" cluster and "default" namespace by default
	I0814 09:48:29.164787  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:31.165193  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:33.664920  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:35.666174  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:38.164723  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:40.166126  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:42.664557  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:45.165400  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:47.165576  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:49.667170  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:52.165979  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:54.664282  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:56.664906  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:48:59.165340  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:01.664298  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:03.664634  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:05.664878  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:08.164757  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:10.664363  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:12.665024  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:14.665777  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:17.164779  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:19.165446  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:21.165539  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:23.166613  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:25.665760  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:28.165417  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	I0814 09:49:30.166072  219213 pod_ready.go:102] pod "metrics-server-7c784ccb57-57jxt" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	a8c67ef87cdd1       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   1                   c110604bc22ff
	ce735ece4a608       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   27de7ceed67ed
	bec3c33484023       6e38f40d628db       About a minute ago   Exited              storage-provisioner         0                   2a79ace051eeb
	86e16032bb32e       8d147537fb7d1       About a minute ago   Running             coredns                     0                   1401b1807f665
	c50fd0e548eeb       6de166512aa22       About a minute ago   Running             kindnet-cni                 0                   974eae1472b45
	d908529b4ea55       ea6b13ed84e03       About a minute ago   Running             kube-proxy                  0                   4bdaa3b79a93b
	4830e1aecf966       0048118155842       About a minute ago   Running             etcd                        2                   2e19cffd2eb99
	8f34f0d629c73       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager     2                   f0e40e056b3cc
	b75eb53336736       b2462aa94d403       About a minute ago   Running             kube-apiserver              2                   d3a70a813649d
	4a0ace495389a       7da2efaa5b480       About a minute ago   Running             kube-scheduler              2                   85cff652cfd1f
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:43:12 UTC, end at Sat 2021-08-14 09:49:34 UTC. --
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.449209835Z" level=info msg="Finish piping stdout of container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.449213683Z" level=info msg="Finish piping stderr of container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.450758957Z" level=info msg="TaskExit event &TaskExit{ContainerID:86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101,ID:86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101,Pid:4542,ExitStatus:0,ExitedAt:2021-08-14 09:48:33.450542968 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.452933161Z" level=info msg="RemoveContainer for \"70970b102b286814dc4d1f0b4c4950106b289b347b95579efcb8c3a7701db667\" returns successfully"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.497301406Z" level=info msg="shim disconnected" id=86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.497369820Z" level=error msg="copy shim log" error="read /proc/self/fd/113: file already closed"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.499164300Z" level=info msg="StopContainer for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" returns successfully"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.499592294Z" level=info msg="StopPodSandbox for \"cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.499655821Z" level=info msg="Container to stop \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.577817220Z" level=info msg="TaskExit event &TaskExit{ContainerID:cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658,ID:cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658,Pid:4315,ExitStatus:137,ExitedAt:2021-08-14 09:48:33.577650676 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.613221535Z" level=info msg="shim disconnected" id=cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.613289283Z" level=error msg="copy shim log" error="read /proc/self/fd/75: file already closed"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.704902098Z" level=info msg="TearDown network for sandbox \"cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658\" successfully"
	Aug 14 09:48:33 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:33.704940624Z" level=info msg="StopPodSandbox for \"cf4990ee951f545505546fd0b83cbc61026ea4a7beb417fbdedf1142b3873658\" returns successfully"
	Aug 14 09:48:34 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:34.451159326Z" level=info msg="RemoveContainer for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\""
	Aug 14 09:48:34 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:34.457567238Z" level=info msg="RemoveContainer for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" returns successfully"
	Aug 14 09:48:34 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:34.457963928Z" level=error msg="ContainerStatus for \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": not found"
	Aug 14 09:48:42 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:42.227528816Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:48:42 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:42.298678190Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" host=fake.domain
	Aug 14 09:48:42 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:42.299958157Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.941207100Z" level=info msg="Finish piping stderr of container \"bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392\""
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.941248210Z" level=info msg="Finish piping stdout of container \"bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392\""
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.942461388Z" level=info msg="TaskExit event &TaskExit{ContainerID:bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392,ID:bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392,Pid:4680,ExitStatus:255,ExitedAt:2021-08-14 09:48:57.94223823 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.985428147Z" level=info msg="shim disconnected" id=bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392
	Aug 14 09:48:57 no-preload-20210814094108-6746 containerd[336]: time="2021-08-14T09:48:57.985507073Z" level=error msg="copy shim log" error="read /proc/self/fd/127: file already closed"
	
	* 
	* ==> coredns [86e16032bb32eff3d167cec13c6d1744bb5b22e90f5b251c926e32a216f7e992] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.003974] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +2.011861] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +4.095709] net_ratelimit: 1 callbacks suppressed
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +0.000018] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000038] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[Aug14 09:46] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[Aug14 09:48] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3dde905c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 96 ee 68 e6 84 31 08 06        ........h..1..
	[  +0.032259] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha730867e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 b0 2c 69 36 56 08 06        ........,i6V..
	[  +0.715640] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth2cf9a783
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 ed 1c 18 61 89 08 06        ..........a...
	[  +0.453803] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethfd647b8c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e c9 5e 1b 0b 08 08 06        ........^.....
	[  +0.238950] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth66c80aa5
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 9d a2 94 49 09 08 06        ......B...I...
	[Aug14 09:50] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth219d8885
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 72 ae 3d be 32 47 08 06        ......r.=.2G..
	
	* 
	* ==> etcd [4830e1aecf96600170b36d628edb0c59a87963c56cc8d0ba1095a62d7d639d47] <==
	* {"level":"info","ts":"2021-08-14T09:48:05.519Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-08-14T09:48:05.520Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:no-preload-20210814094108-6746 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:48:06.107Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.108Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:48:06.109Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-08-14T09:48:06.109Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  09:50:34 up  1:33,  0 users,  load average: 0.99, 1.45, 1.71
	Linux no-preload-20210814094108-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [b75eb5333673655496b4bc715e9efed0129c0d280ac1ebf5e5318a599b2b6058] <==
	* E0814 09:50:30.433126       1 timeout.go:135] post-timeout activity - time-elapsed: 7.914807ms, GET "/api/v1/namespaces/default" result: <nil>
	W0814 09:50:30.496642       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:30.742238       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:30.812118       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:30.834188       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.030504       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.064994       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.164496       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.250877       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.293203       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.412381       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.524398       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.639576       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.689574       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:31.696890       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:50:32.070320       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0814 09:50:34.389578       1 trace.go:205] Trace[1350953094]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (14-Aug-2021 09:49:34.389) (total time: 60000ms):
	Trace[1350953094]: [1m0.000279781s] [1m0.000279781s] END
	E0814 09:50:34.389609       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:50:34.389664       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:50:34.391044       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:50:34.392988       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:50:34.394140       1 trace.go:205] Trace[734514988]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:ce542deb-3d80-4dc5-85e7-037b0ceba233,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:49:34.389) (total time: 60004ms):
	Trace[734514988]: [1m0.004869756s] [1m0.004869756s] END
	E0814 09:50:34.396490       1 timeout.go:135] post-timeout activity - time-elapsed: 6.795222ms, GET "/api/v1/nodes" result: <nil>
	
	* 
	* ==> kube-controller-manager [8f34f0d629c73528c35d2d5429e3e4b6a15ff3297a076fe8833d8f8d554cce17] <==
	* E0814 09:48:26.224473       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.228643       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0814 09:48:26.304403       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.304652       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:48:26.305423       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.309238       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.309402       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.313321       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.316615       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.316618       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:48:26.323101       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.323153       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:48:26.327251       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:48:26.327254       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:48:26.412766       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-vrv5k"
	I0814 09:48:26.416415       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-g5rms"
	E0814 09:48:54.103007       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:48:54.514714       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0814 09:49:24.120495       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:49:24.528643       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0814 09:49:53.051629       1 node_lifecycle_controller.go:1107] Error updating node no-preload-20210814094108-6746: Timeout: request did not complete within requested timeout - context deadline exceeded
	E0814 09:49:54.131356       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:49:54.543734       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0814 09:50:24.150891       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:50:24.557407       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d908529b4ea55e5592dad5d27f811c597f7d45cb89679431c1438d3151f9517f] <==
	* I0814 09:48:27.037457       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:48:27.037498       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:48:27.037517       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0814 09:48:27.205376       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:48:27.205408       1 server_others.go:212] Using iptables Proxier.
	I0814 09:48:27.205421       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:48:27.205438       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:48:27.205854       1 server.go:649] Version: v1.22.0-rc.0
	I0814 09:48:27.206841       1 config.go:224] Starting endpoint slice config controller
	I0814 09:48:27.206877       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0814 09:48:27.206981       1 config.go:315] Starting service config controller
	I0814 09:48:27.206987       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0814 09:48:27.212096       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210814094108-6746.169b234dc86dbb15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03e023acc51449f, ext:284480115, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210814094108-6746", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210814094108-6746", UID:"no-preload-20210814094108-6746", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210814094108-6746.169b234dc86dbb15" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0814 09:48:27.307055       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:48:27.307055       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4a0ace495389a3a4e6986522db1da8a85df590abc65496036bd14a30bbf35769] <==
	* I0814 09:48:09.020591       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0814 09:48:09.020665       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0814 09:48:09.023015       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:48:09.023085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.023241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.023292       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:48:09.023308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:48:09.023357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.023395       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:48:09.023536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:48:09.023548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:48:09.023579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:48:09.023643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:48:09.023694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:48:09.023738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:48:09.023795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:48:09.024171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 09:48:09.850208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:48:09.904384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:48:09.915320       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:48:09.966609       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:48:10.144055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:48:10.201977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:48:10.489524       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0814 09:48:10.620395       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:43:12 UTC, end at Sat 2021-08-14 09:50:34 UTC. --
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.866645    3567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/745b232d-997b-47fd-9540-725527a7c8e0-config-volume\") pod \"745b232d-997b-47fd-9540-725527a7c8e0\" (UID: \"745b232d-997b-47fd-9540-725527a7c8e0\") "
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.866695    3567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd2wv\" (UniqueName: \"kubernetes.io/projected/745b232d-997b-47fd-9540-725527a7c8e0-kube-api-access-jd2wv\") pod \"745b232d-997b-47fd-9540-725527a7c8e0\" (UID: \"745b232d-997b-47fd-9540-725527a7c8e0\") "
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: W0814 09:48:33.866956    3567 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/745b232d-997b-47fd-9540-725527a7c8e0/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.867077    3567 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745b232d-997b-47fd-9540-725527a7c8e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "745b232d-997b-47fd-9540-725527a7c8e0" (UID: "745b232d-997b-47fd-9540-725527a7c8e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.885143    3567 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/745b232d-997b-47fd-9540-725527a7c8e0-kube-api-access-jd2wv" (OuterVolumeSpecName: "kube-api-access-jd2wv") pod "745b232d-997b-47fd-9540-725527a7c8e0" (UID: "745b232d-997b-47fd-9540-725527a7c8e0"). InnerVolumeSpecName "kube-api-access-jd2wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.916222    3567 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.967306    3567 reconciler.go:319] "Volume detached for volume \"kube-api-access-jd2wv\" (UniqueName: \"kubernetes.io/projected/745b232d-997b-47fd-9540-725527a7c8e0-kube-api-access-jd2wv\") on node \"no-preload-20210814094108-6746\" DevicePath \"\""
	Aug 14 09:48:33 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:33.967381    3567 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/745b232d-997b-47fd-9540-725527a7c8e0-config-volume\") on node \"no-preload-20210814094108-6746\" DevicePath \"\""
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: W0814 09:48:34.318144    3567 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod805798a2-948f-4d0c-a548-07118d846033/a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea WatchSource:0}: task a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea not found: not found
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.450184    3567 scope.go:110] "RemoveContainer" containerID="86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.451763    3567 scope.go:110] "RemoveContainer" containerID="a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:34.452046    3567 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-vrv5k_kubernetes-dashboard(805798a2-948f-4d0c-a548-07118d846033)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-vrv5k" podUID=805798a2-948f-4d0c-a548-07118d846033
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.457757    3567 scope.go:110] "RemoveContainer" containerID="86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:34.458207    3567 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": not found" containerID="86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101"
	Aug 14 09:48:34 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:34.458249    3567 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101} err="failed to get container status \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": rpc error: code = NotFound desc = an error occurred when try to find container \"86389dcf6689276b9a7b886999bc8d4efe8ee4a890cbf6a107f3079a6c9c1101\": not found"
	Aug 14 09:48:35 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:35.232259    3567 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=745b232d-997b-47fd-9540-725527a7c8e0 path="/var/lib/kubelet/pods/745b232d-997b-47fd-9540-725527a7c8e0/volumes"
	Aug 14 09:48:35 no-preload-20210814094108-6746 kubelet[3567]: I0814 09:48:35.454728    3567 scope.go:110] "RemoveContainer" containerID="a8c67ef87cdd14bd6d6362b9f1d74816e57e5020f3d6d3c6f71834ecdb4a85ea"
	Aug 14 09:48:35 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:35.454984    3567 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-vrv5k_kubernetes-dashboard(805798a2-948f-4d0c-a548-07118d846033)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-vrv5k" podUID=805798a2-948f-4d0c-a548-07118d846033
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300166    3567 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300210    3567 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300393    3567 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k55k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-rjgmp_kube-system(ca6ddeeb-6afd-4408-8ac8-39df00ec7dea): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 14 09:48:42 no-preload-20210814094108-6746 kubelet[3567]: E0814 09:48:42.300449    3567 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-rjgmp" podUID=ca6ddeeb-6afd-4408-8ac8-39df00ec7dea
	Aug 14 09:48:45 no-preload-20210814094108-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:48:45 no-preload-20210814094108-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:48:45 no-preload-20210814094108-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [ce735ece4a6084f683475bb130a0d3669b0d1586df3645a3fc48799401c6529e] <==
	* 2021/08/14 09:48:32 Using namespace: kubernetes-dashboard
	2021/08/14 09:48:32 Using in-cluster config to connect to apiserver
	2021/08/14 09:48:32 Using secret token for csrf signing
	2021/08/14 09:48:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:48:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:48:32 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/14 09:48:32 Generating JWE encryption key
	2021/08/14 09:48:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:48:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:48:32 Initializing JWE encryption key from synchronized object
	2021/08/14 09:48:32 Creating in-cluster Sidecar client
	2021/08/14 09:48:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:48:32 Serving insecurely on HTTP port: 9090
	2021/08/14 09:48:32 Starting overwatch
	2021/08/14 09:49:25 Metric client health check failed: an error on the server ("unknown") has prevented the request from succeeding (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [bec3c33484023d0ccf5f8321a0a01994446d5609f693e1a7830ee8a260bbe392] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 95 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00013d790, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00013d780)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0005a85a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bef00, 0x18e5530, 0xc00058c800, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000591380)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000591380, 0x18b3d60, 0xc000315650, 0x1, 0xc00014a900)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000591380, 0x3b9aca00, 0x0, 0x1, 0xc00014a900)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000591380, 0x3b9aca00, 0xc00014a900)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:50:34.392909  236476 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/no-preload/serial/Pause (109.19s)
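Note: the step that actually fails here is the post-mortem "minikube logs" call itself (exit status 110), because the apiserver had stopped answering "kubectl describe nodes". The metrics-server ErrImagePull entries in the kubelet log above are expected noise rather than the failure cause: this test deliberately points the MetricsServer registry at the unresolvable host fake.domain. The override it applies (the same invocation appears in the Audit table of the next failure's logs) is:

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210814094108-6746 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain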

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-s5twx" [a8bd4234-6263-4b5b-a621-d2337301a035] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005733753s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Non-zero exit: kubectl --context embed-certs-20210814094325-6746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (54.383746ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
start_stop_delete_test.go:266: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20210814094325-6746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:270: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
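Note: the NotFound above means the dashboard-metrics-scraper deployment was absent from the kubernetes-dashboard namespace when the assertion ran, even though the kubernetes-dashboard pod itself reported healthy. To check by hand what the namespace actually contained, one could run (illustrative commands using the context name from this run):

	kubectl --context embed-certs-20210814094325-6746 get deploy -n kubernetes-dashboard
	kubectl --context embed-certs-20210814094325-6746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard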
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210814094325-6746
helpers_test.go:236: (dbg) docker inspect embed-certs-20210814094325-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576",
	        "Created": "2021-08-14T09:43:27.289846985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219779,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:45:14.416227785Z",
	            "FinishedAt": "2021-08-14T09:45:12.088163109Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hosts",
	        "LogPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576-json.log",
	        "Name": "/embed-certs-20210814094325-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210814094325-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210814094325-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210814094325-6746",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210814094325-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210814094325-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a6933adbbd3ce722a675e5adeafc189199fe1d8fada7eebf787d37f915e239a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32948"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1a6933adbbd3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210814094325-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d2385af2cb05"
	                    ],
	                    "NetworkID": "dbc6f9acad495850f4b0b885d051bfbd2cce05a9032571d93062419b0fbb36d2",
	                    "EndpointID": "8a463f9704f74ef43e52668f81108caad669353d43761c271d2d4d574c959212",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
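Note: rather than reading the full dump above, the fields the post-mortem cares about can be pulled with docker inspect's Go-template --format flag; for example (illustrative queries, not part of the test harness):

	docker inspect -f '{{.State.Status}}' embed-certs-20210814094325-6746
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20210814094325-6746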
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210814094325-6746 logs -n 25
E0814 09:51:07.559210    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
helpers_test.go:253: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p pause-20210814093545-6746                      | pause-20210814093545-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:07 UTC | Sat, 14 Aug 2021 09:41:08 UTC |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:16 UTC | Sat, 14 Aug 2021 09:41:17 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:17 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                           |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                           |         |         |                               |                               |
	|         | --keep-context=false                              |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                           |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:18 UTC | Sat, 14 Aug 2021 09:43:19 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:20 UTC | Sat, 14 Aug 2021 09:43:21 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:21 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                   | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:50 UTC | Sat, 14 Aug 2021 09:44:51 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:51 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:48:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:48:45 UTC | Sat, 14 Aug 2021 09:48:45 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:34 UTC | Sat, 14 Aug 2021 09:50:38 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:38 UTC | Sat, 14 Aug 2021 09:50:39 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210814095039-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:39 UTC | Sat, 14 Aug 2021 09:50:40 UTC |
	|         | disable-driver-mounts-20210814095039-6746         |                                           |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:50:56 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                           |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:50:40
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:50:40.078160  242948 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:50:40.078244  242948 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:50:40.078254  242948 out.go:311] Setting ErrFile to fd 2...
	I0814 09:50:40.078258  242948 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:50:40.078366  242948 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:50:40.078628  242948 out.go:305] Setting JSON to false
	I0814 09:50:40.119352  242948 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5602,"bootTime":1628929038,"procs":276,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:50:40.119448  242948 start.go:121] virtualization: kvm guest
	I0814 09:50:40.122500  242948 out.go:177] * [default-k8s-different-port-20210814095040-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:50:40.124210  242948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:50:40.122699  242948 notify.go:169] Checking for updates...
	I0814 09:50:40.125676  242948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:50:40.127206  242948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:50:40.128678  242948 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:50:40.129277  242948 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:50:40.129398  242948 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:50:40.129490  242948 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:50:40.129531  242948 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:50:40.189038  242948 docker.go:132] docker version: linux-19.03.15
	I0814 09:50:40.189164  242948 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:50:40.289687  242948 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:50:40.235277528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:50:40.289786  242948 docker.go:244] overlay module found
	I0814 09:50:40.291517  242948 out.go:177] * Using the docker driver based on user configuration
	I0814 09:50:40.291541  242948 start.go:278] selected driver: docker
	I0814 09:50:40.291546  242948 start.go:751] validating driver "docker" against <nil>
	I0814 09:50:40.291562  242948 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:50:40.291608  242948 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:50:40.291627  242948 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:50:40.292971  242948 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:50:40.293780  242948 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:50:40.384600  242948 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:50:40.338311012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:50:40.384710  242948 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:50:40.384920  242948 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:50:40.384946  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:50:40.384954  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:50:40.384964  242948 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:50:40.384974  242948 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:50:40.384981  242948 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:50:40.384991  242948 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
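
	The config struct dumped above is persisted as JSON once the profile is created (see the "Saving config" line below). The following is a minimal Go sketch, not minikube's own code, for reading a few fields back out of that config.json; the field names mirror the dump above, and encoding/json simply ignores everything the partial struct does not declare:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Partial mirror of the dumped ClusterConfig; unknown fields are ignored.
type clusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		NodePort          int
	}
}

func main() {
	// e.g. go run main.go ~/.minikube/profiles/<profile>/config.json
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		panic(err)
	}
	fmt.Printf("%s (%s): apiserver port %d\n", cc.Name, cc.Driver, cc.KubernetesConfig.NodePort)
}
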
	I0814 09:50:40.387054  242948 out.go:177] * Starting control plane node default-k8s-different-port-20210814095040-6746 in cluster default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.387087  242948 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:50:40.388430  242948 out.go:177] * Pulling base image ...
	I0814 09:50:40.388458  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:40.388489  242948 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:50:40.388500  242948 cache.go:56] Caching tarball of preloaded images
	I0814 09:50:40.388547  242948 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:50:40.388667  242948 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:50:40.388684  242948 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:50:40.388818  242948 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json ...
	I0814 09:50:40.388847  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json: {Name:mk37096ce7d1c408ab2119b9d1016f0ec54511d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:40.477442  242948 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:50:40.477474  242948 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:50:40.477489  242948 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:50:40.477530  242948 start.go:313] acquiring machines lock for default-k8s-different-port-20210814095040-6746: {Name:mke7f558db837977766a2f1aff9770a5c1ff83a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:50:40.477640  242948 start.go:317] acquired machines lock for "default-k8s-different-port-20210814095040-6746" in 92.564µs
	I0814 09:50:40.477663  242948 start.go:89] Provisioning new machine with config: &{Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:50:40.477786  242948 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:50:37.722771  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:39.723935  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:41.724562  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:40.480990  242948 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0814 09:50:40.481210  242948 start.go:160] libmachine.API.Create for "default-k8s-different-port-20210814095040-6746" (driver="docker")
	I0814 09:50:40.481241  242948 client.go:168] LocalClient.Create starting
	I0814 09:50:40.481338  242948 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:50:40.481371  242948 main.go:130] libmachine: Decoding PEM data...
	I0814 09:50:40.481389  242948 main.go:130] libmachine: Parsing certificate...
	I0814 09:50:40.481488  242948 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:50:40.481510  242948 main.go:130] libmachine: Decoding PEM data...
	I0814 09:50:40.481528  242948 main.go:130] libmachine: Parsing certificate...
	I0814 09:50:40.481849  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:50:40.525737  242948 cli_runner.go:162] docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:50:40.525811  242948 network_create.go:255] running [docker network inspect default-k8s-different-port-20210814095040-6746] to gather additional debugging logs...
	I0814 09:50:40.525834  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746
	W0814 09:50:40.571336  242948 cli_runner.go:162] docker network inspect default-k8s-different-port-20210814095040-6746 returned with exit code 1
	I0814 09:50:40.571370  242948 network_create.go:258] error running [docker network inspect default-k8s-different-port-20210814095040-6746]: docker network inspect default-k8s-different-port-20210814095040-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.571397  242948 network_create.go:260] output of [docker network inspect default-k8s-different-port-20210814095040-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20210814095040-6746
	
	** /stderr **
	I0814 09:50:40.571446  242948 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:50:40.614259  242948 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000114740] misses:0}
	I0814 09:50:40.614311  242948 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:50:40.614327  242948 network_create.go:106] attempt to create docker network default-k8s-different-port-20210814095040-6746 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0814 09:50:40.614367  242948 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.694116  242948 network_create.go:90] docker network default-k8s-different-port-20210814095040-6746 192.168.49.0/24 created
	I0814 09:50:40.694168  242948 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20210814095040-6746" container
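
	The network bring-up above follows a fixed pattern: inspect fails ("No such network"), a free /24 is reserved, then a labeled bridge network is created with an explicit gateway so the node container can be assigned the static IP just calculated. Below is a sketch (illustrative only) of issuing the same `docker network create` through os/exec, roughly the way cli_runner shells out; the network name "demo-net" is a stand-in:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the logged command, including the odd-looking "-o --ip-masq"
	// and "-o --icc" driver options; "demo-net" is an illustrative name.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"demo-net").CombinedOutput()
	if err != nil {
		log.Fatalf("network create failed: %v\n%s", err, out)
	}
	fmt.Printf("created network: %s", out)
}
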
	I0814 09:50:40.694232  242948 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:50:40.739929  242948 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20210814095040-6746 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:50:40.779996  242948 oci.go:102] Successfully created a docker volume default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.780078  242948 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20210814095040-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --entrypoint /usr/bin/test -v default-k8s-different-port-20210814095040-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:50:41.548292  242948 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20210814095040-6746
	W0814 09:50:41.548348  242948 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:50:41.548361  242948 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:50:41.548375  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:41.548406  242948 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:50:41.548418  242948 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:50:41.548477  242948 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210814095040-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:50:41.636582  242948 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20210814095040-6746 --name default-k8s-different-port-20210814095040-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --network default-k8s-different-port-20210814095040-6746 --ip 192.168.49.2 --volume default-k8s-different-port-20210814095040-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:50:42.144372  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Running}}
	I0814 09:50:42.192564  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:42.243507  242948 cli_runner.go:115] Run: docker exec default-k8s-different-port-20210814095040-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:50:42.382169  242948 oci.go:278] the created container "default-k8s-different-port-20210814095040-6746" has a running status.
	I0814 09:50:42.382207  242948 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa...
	I0814 09:50:42.445995  242948 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:50:42.839357  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:42.883219  242948 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:50:42.883247  242948 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210814095040-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:50:44.223377  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:46.723426  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:45.601178  242948 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210814095040-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.052608109s)
	I0814 09:50:45.601212  242948 kic.go:188] duration metric: took 4.052804 seconds to extract preloaded images to volume
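
	The preload is a tar archive compressed with lz4; the `docker run --entrypoint /usr/bin/tar ... -I lz4 -xf` command above unpacks it straight into the named volume that later becomes the node's /var, which is why the subsequent start finds all images already present. A hedged Go sketch for listing such a tarball's entries locally, assuming the third-party github.com/pierrec/lz4/v4 package (not something the log itself uses):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4" // external dependency, assumed available
)

func main() {
	// e.g. preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Stream-decompress lz4, then walk the tar entries.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Println(hdr.Name)
	}
}
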
	I0814 09:50:45.601281  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:45.639926  242948 machine.go:88] provisioning docker machine ...
	I0814 09:50:45.639958  242948 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210814095040-6746"
	I0814 09:50:45.640004  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:45.677111  242948 main.go:130] libmachine: Using SSH client type: native
	I0814 09:50:45.677287  242948 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I0814 09:50:45.677302  242948 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210814095040-6746 && echo "default-k8s-different-port-20210814095040-6746" | sudo tee /etc/hostname
	I0814 09:50:45.811627  242948 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210814095040-6746
	
	I0814 09:50:45.811696  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:45.851031  242948 main.go:130] libmachine: Using SSH client type: native
	I0814 09:50:45.851173  242948 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I0814 09:50:45.851198  242948 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210814095040-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210814095040-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210814095040-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:50:45.971942  242948 main.go:130] libmachine: SSH cmd err, output: <nil>: 
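
	Host provisioning here is plain SSH against the forwarded port (127.0.0.1:32953) using the per-machine id_rsa key and the "docker" user, both visible in the surrounding lines. A minimal sketch of that kind of "native" SSH client with golang.org/x/crypto/ssh; host-key checking is skipped, which is only reasonable for a throwaway local container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // the per-machine key minikube generated
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32953", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}
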
	I0814 09:50:45.971970  242948 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:50:45.972021  242948 ubuntu.go:177] setting up certificates
	I0814 09:50:45.972032  242948 provision.go:83] configureAuth start
	I0814 09:50:45.972081  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.010084  242948 provision.go:138] copyHostCerts
	I0814 09:50:46.010154  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:50:46.010173  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:50:46.010236  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:50:46.010318  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:50:46.010330  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:50:46.010360  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:50:46.010420  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:50:46.010429  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:50:46.010454  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:50:46.010510  242948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210814095040-6746 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210814095040-6746]
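
	configureAuth reuses the existing minikubeCA key pair to mint a server certificate whose SANs cover the static IP, loopback, and hostname listed above. The following is an illustrative crypto/x509 sketch of that shape of CA-signed certificate; it is not minikube's implementation (which loads the existing ca.pem/ca-key.pem rather than generating a fresh CA as done here):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; minikube would load its persisted ca.pem/ca-key.pem instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the same kinds of SANs the log lists.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		DNSNames:     []string{"localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
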
	I0814 09:50:46.128382  242948 provision.go:172] copyRemoteCerts
	I0814 09:50:46.128444  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:50:46.128496  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.168879  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.259375  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:50:46.275048  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0814 09:50:46.289966  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:50:46.304803  242948 provision.go:86] duration metric: configureAuth took 332.753132ms
	I0814 09:50:46.304824  242948 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:50:46.304953  242948 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:50:46.304963  242948 machine.go:91] provisioned docker machine in 665.019762ms
	I0814 09:50:46.304997  242948 client.go:171] LocalClient.Create took 5.823722197s
	I0814 09:50:46.305019  242948 start.go:168] duration metric: libmachine.API.Create for "default-k8s-different-port-20210814095040-6746" took 5.823809022s
	I0814 09:50:46.305031  242948 start.go:267] post-start starting for "default-k8s-different-port-20210814095040-6746" (driver="docker")
	I0814 09:50:46.305037  242948 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:50:46.305081  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:50:46.305111  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.345433  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.439381  242948 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:50:46.441947  242948 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:50:46.441969  242948 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:50:46.441986  242948 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:50:46.441995  242948 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:50:46.442005  242948 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:50:46.442045  242948 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:50:46.442142  242948 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:50:46.442245  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:50:46.448155  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:50:46.463470  242948 start.go:270] post-start completed in 158.429661ms
	I0814 09:50:46.463755  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.503225  242948 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json ...
	I0814 09:50:46.503476  242948 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:50:46.503519  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.540563  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.625051  242948 start.go:129] duration metric: createHost completed in 6.147253606s
	I0814 09:50:46.625076  242948 start.go:80] releasing machines lock for "default-k8s-different-port-20210814095040-6746", held for 6.147423912s
	I0814 09:50:46.625156  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.664584  242948 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:50:46.664645  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.664652  242948 ssh_runner.go:149] Run: systemctl --version
	I0814 09:50:46.664697  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.707048  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.707255  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.821908  242948 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:50:46.831031  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:50:46.839119  242948 docker.go:153] disabling docker service ...
	I0814 09:50:46.839168  242948 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:50:46.853257  242948 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:50:46.861067  242948 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:50:46.923543  242948 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:50:46.978774  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:50:46.986852  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:50:46.997957  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
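
	The containerd config is shipped base64-encoded, presumably so the TOML's nested quotes survive the shell layers, and `base64 -d` restores it into /etc/containerd/config.toml on the node. Decoding the blob above to inspect it takes only a few lines of Go (pipe the blob via stdin):

package main

import (
	"encoding/base64"
	"io"
	"os"
)

func main() {
	// Usage: echo '<blob from the log>' | go run decode.go > config.toml
	if _, err := io.Copy(os.Stdout, base64.NewDecoder(base64.StdEncoding, os.Stdin)); err != nil {
		panic(err)
	}
}
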
	I0814 09:50:47.009652  242948 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:50:47.015207  242948 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:50:47.015245  242948 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:50:47.021672  242948 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:50:47.027332  242948 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:50:47.082686  242948 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:50:47.143652  242948 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:50:47.143716  242948 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:50:47.146808  242948 start.go:413] Will wait 60s for crictl version
	I0814 09:50:47.146863  242948 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:50:47.169179  242948 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:50:47Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:50:49.222994  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:51.223160  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:53.223991  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:54.720501  219213 pod_ready.go:97] error getting pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-c7vfk" not found
	I0814 09:50:54.720529  219213 pod_ready.go:81] duration metric: took 21.007916602s waiting for pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace to be "Ready" ...
	E0814 09:50:54.720541  219213 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-c7vfk" not found
	I0814 09:50:54.720550  219213 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.724877  219213 pod_ready.go:92] pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.724893  219213 pod_ready.go:81] duration metric: took 4.331809ms waiting for pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.724903  219213 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.728530  219213 pod_ready.go:92] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.728548  219213 pod_ready.go:81] duration metric: took 3.638427ms waiting for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.728567  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.732170  219213 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.732186  219213 pod_ready.go:81] duration metric: took 3.612156ms waiting for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.732196  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.735668  219213 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.735682  219213 pod_ready.go:81] duration metric: took 3.480378ms waiting for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.735691  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xcshh" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.920884  219213 pod_ready.go:92] pod "kube-proxy-xcshh" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.920903  219213 pod_ready.go:81] duration metric: took 185.206559ms waiting for pod "kube-proxy-xcshh" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.920913  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:55.321495  219213 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:55.321518  219213 pod_ready.go:81] duration metric: took 400.598171ms waiting for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:55.321529  219213 pod_ready.go:38] duration metric: took 21.625428997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:50:55.321547  219213 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:50:55.321592  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:50:55.343154  219213 api_server.go:70] duration metric: took 21.759811987s to wait for apiserver process to appear ...
	I0814 09:50:55.343176  219213 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:50:55.343186  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:50:55.347349  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:50:55.348107  219213 api_server.go:139] control plane version: v1.21.3
	I0814 09:50:55.348127  219213 api_server.go:129] duration metric: took 4.944829ms to wait for apiserver health ...
	I0814 09:50:55.348136  219213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:50:55.523276  219213 system_pods.go:59] 9 kube-system pods found
	I0814 09:50:55.523298  219213 system_pods.go:61] "coredns-558bd4d5db-wjlqr" [052286f2-685f-4520-8f5c-13e35b07e27e] Running
	I0814 09:50:55.523303  219213 system_pods.go:61] "etcd-embed-certs-20210814094325-6746" [62f460fe-d11d-4e50-a549-f9a153888a5d] Running
	I0814 09:50:55.523306  219213 system_pods.go:61] "kindnet-kvv65" [c0bc8515-5565-4fb1-a82d-d01bc090d641] Running
	I0814 09:50:55.523311  219213 system_pods.go:61] "kube-apiserver-embed-certs-20210814094325-6746" [04913668-df62-4d8e-8166-fe1aaf7ba56b] Running
	I0814 09:50:55.523316  219213 system_pods.go:61] "kube-controller-manager-embed-certs-20210814094325-6746" [57b659e5-19e6-415a-995a-3e92b39b5a41] Running
	I0814 09:50:55.523319  219213 system_pods.go:61] "kube-proxy-xcshh" [cbf58cc2-48cb-4eca-8d30-904694fbb480] Running
	I0814 09:50:55.523323  219213 system_pods.go:61] "kube-scheduler-embed-certs-20210814094325-6746" [ccdd236c-694a-4805-a6cd-7fa58b99395e] Running
	I0814 09:50:55.523332  219213 system_pods.go:61] "metrics-server-7c784ccb57-5nrfw" [5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:50:55.523338  219213 system_pods.go:61] "storage-provisioner" [3f6d1385-66f0-49e9-a561-d557c138f7b6] Running
	I0814 09:50:55.523344  219213 system_pods.go:74] duration metric: took 175.203116ms to wait for pod list to return data ...
	I0814 09:50:55.523353  219213 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:50:55.721303  219213 default_sa.go:45] found service account: "default"
	I0814 09:50:55.721329  219213 default_sa.go:55] duration metric: took 197.969622ms for default service account to be created ...
	I0814 09:50:55.721339  219213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:50:55.923597  219213 system_pods.go:86] 9 kube-system pods found
	I0814 09:50:55.923624  219213 system_pods.go:89] "coredns-558bd4d5db-wjlqr" [052286f2-685f-4520-8f5c-13e35b07e27e] Running
	I0814 09:50:55.923629  219213 system_pods.go:89] "etcd-embed-certs-20210814094325-6746" [62f460fe-d11d-4e50-a549-f9a153888a5d] Running
	I0814 09:50:55.923634  219213 system_pods.go:89] "kindnet-kvv65" [c0bc8515-5565-4fb1-a82d-d01bc090d641] Running
	I0814 09:50:55.923639  219213 system_pods.go:89] "kube-apiserver-embed-certs-20210814094325-6746" [04913668-df62-4d8e-8166-fe1aaf7ba56b] Running
	I0814 09:50:55.923644  219213 system_pods.go:89] "kube-controller-manager-embed-certs-20210814094325-6746" [57b659e5-19e6-415a-995a-3e92b39b5a41] Running
	I0814 09:50:55.923651  219213 system_pods.go:89] "kube-proxy-xcshh" [cbf58cc2-48cb-4eca-8d30-904694fbb480] Running
	I0814 09:50:55.923655  219213 system_pods.go:89] "kube-scheduler-embed-certs-20210814094325-6746" [ccdd236c-694a-4805-a6cd-7fa58b99395e] Running
	I0814 09:50:55.923663  219213 system_pods.go:89] "metrics-server-7c784ccb57-5nrfw" [5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:50:55.923670  219213 system_pods.go:89] "storage-provisioner" [3f6d1385-66f0-49e9-a561-d557c138f7b6] Running
	I0814 09:50:55.923677  219213 system_pods.go:126] duration metric: took 202.332969ms to wait for k8s-apps to be running ...
	I0814 09:50:55.923687  219213 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:50:55.923726  219213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:50:55.932502  219213 system_svc.go:56] duration metric: took 8.810307ms WaitForService to wait for kubelet.
	I0814 09:50:55.932525  219213 kubeadm.go:547] duration metric: took 22.349186518s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:50:55.932552  219213 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:50:56.122070  219213 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:50:56.122094  219213 node_conditions.go:123] node cpu capacity is 8
	I0814 09:50:56.122107  219213 node_conditions.go:105] duration metric: took 189.549407ms to run NodePressure ...
	I0814 09:50:56.122116  219213 start.go:231] waiting for startup goroutines ...
	I0814 09:50:56.165685  219213 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:50:56.167880  219213 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210814094325-6746" cluster and "default" namespace by default
	I0814 09:50:58.219059  242948 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:50:58.309835  242948 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:50:58.309907  242948 ssh_runner.go:149] Run: containerd --version
	I0814 09:50:58.330960  242948 ssh_runner.go:149] Run: containerd --version
	I0814 09:50:58.353208  242948 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:50:58.353286  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:50:58.391168  242948 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0814 09:50:58.394266  242948 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:50:58.403013  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:58.403075  242948 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:50:58.424249  242948 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:50:58.424264  242948 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:50:58.424296  242948 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:50:58.444133  242948 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:50:58.444150  242948 cache_images.go:74] Images are preloaded, skipping loading
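
	Both `sudo crictl images --output json` calls above return the full preloaded image list, so image loading is skipped. A sketch for pulling the tags out of that JSON follows; the field names (`images`, `repoTags`) are assumed from the CRI ListImagesResponse shape and should be verified against the actual crictl output:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Assumed shape of `sudo crictl images --output json`.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Usage: sudo crictl images --output json | go run listimages.go
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags)
	}
}
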
	I0814 09:50:58.444182  242948 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:50:58.464037  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:50:58.464053  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:50:58.464062  242948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:50:58.464075  242948 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210814095040-6746 NodeName:default-k8s-different-port-20210814095040-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:50:58.464192  242948 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210814095040-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:50:58.464276  242948 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210814095040-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0814 09:50:58.464314  242948 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:50:58.470390  242948 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:50:58.470444  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:50:58.476398  242948 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0814 09:50:58.487492  242948 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:50:58.498448  242948 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
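
	The kubeadm.yaml.new just written bundles the four YAML documents printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by `---`. A small sketch that lists the document kinds in such a file, assuming the third-party gopkg.in/yaml.v3 package:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3" // external dependency, assumed available
)

func main() {
	// e.g. go run kinds.go /var/tmp/minikube/kubeadm.yaml.new
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err == nil && meta.Kind != "" {
			fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
		}
	}
}
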
	I0814 09:50:58.509594  242948 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:50:58.512068  242948 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:50:58.520004  242948 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746 for IP: 192.168.49.2
	I0814 09:50:58.520042  242948 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:50:58.520057  242948 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:50:58.520106  242948 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key
	I0814 09:50:58.520115  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt with IP's: []
	I0814 09:50:58.605811  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt ...
	I0814 09:50:58.605832  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt: {Name:mkacaae754c3f3d8a12af248e60d4f2dfeb1fcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.605983  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key ...
	I0814 09:50:58.605995  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key: {Name:mkc908febb624f8dcae4593839bc3cdd86a1ad31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.606077  242948 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2
	I0814 09:50:58.606087  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:50:58.792141  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 ...
	I0814 09:50:58.792164  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2: {Name:mk1ce321d1a9a1e324dde7b9a016555ddd6031d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.792303  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2 ...
	I0814 09:50:58.792317  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2: {Name:mk8ca5617228b674c440c829a6a0ed6ba7adf225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.792390  242948 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt
	I0814 09:50:58.792489  242948 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key
	I0814 09:50:58.792543  242948 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key
	I0814 09:50:58.792551  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt with IP's: []
	I0814 09:50:58.996340  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt ...
	I0814 09:50:58.996371  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt: {Name:mk55a688d6c41aa245f7d2d45cd1b092fbfe314a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.996534  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key ...
	I0814 09:50:58.996546  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key: {Name:mk240bac2833d6f959e53ffe7865c747fc43bc7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
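Each `crypto.go:69` / `crypto.go:157` pair above generates a keypair and writes the cert and key as PEM under the profile directory. A minimal sketch of that step using Go's crypto/x509, self-signed for brevity (minikube actually signs client certs with the minikubeCA key, and the subject fields below are illustrative assumptions, not minikube's exact code):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		// Assumed subject: a cluster-admin client identity.
    		Subject:     pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
    		NotBefore:   time.Now().Add(-time.Hour),
    		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	// Self-signed here; minikube passes the CA cert and CA key as the parent instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	certOut, err := os.Create("client.crt")
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	certOut.Close()
    	keyOut, err := os.Create("client.key")
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	keyOut.Close()
    }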
	I0814 09:50:58.996701  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:50:58.996736  242948 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:50:58.996746  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:50:58.996770  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:50:58.996833  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:50:58.996856  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:50:58.996899  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:50:58.997780  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:50:59.014380  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:50:59.029550  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:50:59.045055  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:50:59.060084  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:50:59.074803  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:50:59.089518  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:50:59.104113  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:50:59.119093  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:50:59.134192  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:50:59.150511  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:50:59.165586  242948 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:50:59.176450  242948 ssh_runner.go:149] Run: openssl version
	I0814 09:50:59.180631  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:50:59.187025  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.189664  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.189709  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.195171  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:50:59.201806  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:50:59.208282  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.211124  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.211160  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.215395  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:50:59.221855  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:50:59.228613  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.231438  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.231472  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.236446  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
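The three `openssl x509 -hash` / `ln -fs` pairs above install each CA under /etc/ssl/certs keyed by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL's default verifier looks up trusted roots. A sketch of one such step, assuming the openssl binary is on PATH and the process can write to /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
    	// Same command the log runs over SSH: openssl x509 -hash -noout -in <pem>.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	target := filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))
    	os.Remove(link) // emulate ln -fs: replace any existing link
    	if err := os.Symlink(target, link); err != nil {
    		panic(err)
    	}
    	fmt.Println(link, "->", target)
    }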
	I0814 09:50:59.243050  242948 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:50:59.243131  242948 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:50:59.243196  242948 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:50:59.266979  242948 cri.go:76] found id: ""
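The `cri.go` step above lists any pre-existing kube-system containers via crictl; the empty result (`found id: ""`) confirms a clean node before bootstrap. A hedged sketch of that listing, assuming crictl is installed and configured for the containerd socket:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same invocation the log runs over SSH: list all containers (including
    	// exited ones) labeled with the kube-system pod namespace, IDs only.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		fmt.Println(`found id: ""`) // matches the empty result in the log
    		return
    	}
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    }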
	I0814 09:50:59.267040  242948 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:50:59.273892  242948 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:50:59.280356  242948 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:50:59.280407  242948 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:50:59.286903  242948 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:50:59.286944  242948 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
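Since the stale-config check found no existing kubeconfig files, minikube proceeds straight to `kubeadm init`, suppressing the preflight checks that are expected to fail inside a docker-driver container (including SystemVerification, per the `kubeadm.go:220` line above). A sketch of assembling and running that command, with the binary path and ignore list copied from the log (this must run as root, as the logged command does via sudo):

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	ignored := []string{
    		"DirAvailable--etc-kubernetes-manifests",
    		"DirAvailable--var-lib-minikube",
    		"DirAvailable--var-lib-minikube-etcd",
    		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
    		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
    		"Port-10250", "Swap", "Mem", "SystemVerification",
    		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
    	}
    	// Absolute path instead of the log's `env PATH=...` prefix; same binary.
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.21.3/kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors="+strings.Join(ignored, ","))
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }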
	I0814 09:50:59.556038  242948 out.go:204]   - Generating certificates and keys ...
	I0814 09:51:02.082739  242948 out.go:204]   - Booting up control plane ...
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	1862174cc5e0e       523cad1a4df73       8 seconds ago       Exited              dashboard-metrics-scraper   2                   913463a8ccf64
	efd9db92085eb       9a07b5b4bfac0       28 seconds ago      Running             kubernetes-dashboard        0                   7e54e81316330
	8ef370c3ebb55       6e38f40d628db       30 seconds ago      Running             storage-provisioner         0                   3cad7404cb4f2
	c2051bc4ae872       296a6d5035e2d       31 seconds ago      Running             coredns                     0                   021199ccb79c5
	c145e33f75c99       6de166512aa22       32 seconds ago      Running             kindnet-cni                 0                   c9bd7ea8434fc
	f1468c559df50       adb2816ea823a       33 seconds ago      Running             kube-proxy                  0                   5d392d166e020
	df5983ab2d3f9       6be0dc1302e30       55 seconds ago      Running             kube-scheduler              0                   09addeb11662d
	36ad15e77f314       bc2bb319a7038       55 seconds ago      Running             kube-controller-manager     0                   1c88459572814
	a2029d64830b8       3d174f00aa39e       55 seconds ago      Running             kube-apiserver              0                   07ce24f2cc07a
	90b59592c23d4       0369cf4303ffd       55 seconds ago      Running             etcd                        0                   247a0e8677d04
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:45:14 UTC, end at Sat 2021-08-14 09:51:07 UTC. --
	Aug 14 09:50:41 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:41.960862226Z" level=info msg="Container to stop \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.062121875Z" level=info msg="TaskExit event &TaskExit{ContainerID:c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65,ID:c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65,Pid:5654,ExitStatus:137,ExitedAt:2021-08-14 09:50:42.061866149 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.101739951Z" level=info msg="shim disconnected" id=c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.101820216Z" level=error msg="copy shim log" error="read /proc/self/fd/83: file already closed"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.188891167Z" level=info msg="TearDown network for sandbox \"c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65\" successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.188934907Z" level=info msg="StopPodSandbox for \"c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65\" returns successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.515303799Z" level=info msg="RemoveContainer for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.520544783Z" level=info msg="RemoveContainer for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" returns successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.521181572Z" level=error msg="ContainerStatus for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\": not found"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.522692420Z" level=info msg="RemoveContainer for \"6bf02b7a63d0e39db037c2f3b89fba9d4dfee9f801d768b39968f72bd9d2b45a\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.527393348Z" level=info msg="RemoveContainer for \"6bf02b7a63d0e39db037c2f3b89fba9d4dfee9f801d768b39968f72bd9d2b45a\" returns successfully"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.227521500Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.294737474Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.295926302Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.227729472Z" level=info msg="CreateContainer within sandbox \"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,}"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.262462808Z" level=info msg="CreateContainer within sandbox \"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,} returns container id \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.262950261Z" level=info msg="StartContainer for \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.434757635Z" level=info msg="StartContainer for \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\" returns successfully"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469151595Z" level=info msg="Finish piping stderr of container \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469184697Z" level=info msg="Finish piping stdout of container \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469980873Z" level=info msg="TaskExit event &TaskExit{ContainerID:1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82,ID:1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82,Pid:6740,ExitStatus:1,ExitedAt:2021-08-14 09:50:59.469759043 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.505397460Z" level=info msg="shim disconnected" id=1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.505481929Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.547120670Z" level=info msg="RemoveContainer for \"8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.551904618Z" level=info msg="RemoveContainer for \"8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0\" returns successfully"
	
	* 
	* ==> coredns [c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210814094325-6746
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210814094325-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=embed-certs-20210814094325-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_50_19_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:50:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210814094325-6746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Aug 2021 09:51:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:50:54 +0000   Sat, 14 Aug 2021 09:50:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:50:54 +0000   Sat, 14 Aug 2021 09:50:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:50:54 +0000   Sat, 14 Aug 2021 09:50:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:50:54 +0000   Sat, 14 Aug 2021 09:50:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20210814094325-6746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                8b12fde3-d85f-4477-8bb2-011e8d6b01bd
	  Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-wjlqr                                  100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     35s
	  kube-system                 etcd-embed-certs-20210814094325-6746                      100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-kvv65                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      35s
	  kube-system                 kube-apiserver-embed-certs-20210814094325-6746            250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-embed-certs-20210814094325-6746  200m (2%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-proxy-xcshh                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-embed-certs-20210814094325-6746            100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 metrics-server-7c784ccb57-5nrfw                           100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         32s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-gmk5j                0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-s5twx                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 57s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x5 over 57s)  kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x5 over 57s)  kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x4 over 57s)  kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 43s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                36s                kubelet     Node embed-certs-20210814094325-6746 status is now: NodeReady
	  Normal  Starting                 33s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug14 09:46] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dbc6f9acad49
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[  +0.000002] ll header: 00000000: 02 42 69 b4 c4 ef 02 42 c0 a8 3a 02 08 00        .Bi....B..:...
	[Aug14 09:48] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth3dde905c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 96 ee 68 e6 84 31 08 06        ........h..1..
	[  +0.032259] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha730867e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 b0 2c 69 36 56 08 06        ........,i6V..
	[  +0.715640] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth2cf9a783
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 ed 1c 18 61 89 08 06        ..........a...
	[  +0.453803] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethfd647b8c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e c9 5e 1b 0b 08 08 06        ........^.....
	[  +0.238950] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth66c80aa5
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 9d a2 94 49 09 08 06        ......B...I...
	[Aug14 09:50] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth219d8885
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 72 ae 3d be 32 47 08 06        ......r.=.2G..
	[  +0.407019] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethda4d8623
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff f2 c4 73 9e f2 b3 08 06        ........s.....
	[  +1.892879] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethbc400799
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 5a 18 0b d4 f0 08 06        .......Z......
	[  +0.451541] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethf3fb868f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 46 cd a5 37 a9 08 06        .......F..7...
	[  +0.899820] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth117eea46
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e bd 0c 0c 46 f1 08 06        ..........F...
	[  +4.461460] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55] <==
	* 2021-08-14 09:50:11.649106 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-14 09:50:11.649343 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)
	2021-08-14 09:50:11.649615 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-14 09:50:11.651328 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-14 09:50:11.651385 I | embed: listening for peers on 192.168.58.2:2380
	2021-08-14 09:50:11.651465 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 is starting a new election at term 1
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 became candidate at term 2
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 became leader at term 2
	raft2021/08/14 09:50:11 INFO: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-14 09:50:11.942588 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-14 09:50:11.943313 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-14 09:50:11.943369 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-14 09:50:11.943426 I | etcdserver: published {Name:embed-certs-20210814094325-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:50:11.943442 I | embed: ready to serve client requests
	2021-08-14 09:50:11.943553 I | embed: ready to serve client requests
	2021-08-14 09:50:11.945597 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-14 09:50:11.950474 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:50:29.041684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:34.901449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:44.865083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:54.864171 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:51:04.864015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  09:51:07 up  1:33,  0 users,  load average: 0.87, 1.39, 1.68
	Linux embed-certs-20210814094325-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83] <==
	* I0814 09:50:15.812197       1 controller.go:611] quota admission added evaluator for: namespaces
	I0814 09:50:15.815631       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0814 09:50:16.706666       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0814 09:50:16.706688       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0814 09:50:16.710954       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0814 09:50:16.713752       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0814 09:50:16.713774       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0814 09:50:17.115172       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 09:50:17.142416       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0814 09:50:17.224046       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0814 09:50:17.224850       1 controller.go:611] quota admission added evaluator for: endpoints
	I0814 09:50:17.227906       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 09:50:18.366040       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 09:50:18.739842       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0814 09:50:18.813747       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 09:50:24.198880       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 09:50:32.545388       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0814 09:50:32.795460       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0814 09:50:37.907551       1 handler_proxy.go:102] no RequestInfo found in the context
	E0814 09:50:37.907637       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0814 09:50:37.907647       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 09:50:49.480089       1 client.go:360] parsed scheme: "passthrough"
	I0814 09:50:49.480137       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0814 09:50:49.480147       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee] <==
	* I0814 09:50:33.088763       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-c7vfk"
	I0814 09:50:35.313219       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0814 09:50:35.331213       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:50:35.418588       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:50:35.426804       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-5nrfw"
	I0814 09:50:36.029281       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0814 09:50:36.117463       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:50:36.121445       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0814 09:50:36.122885       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.127069       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.129271       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.129538       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.203476       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:50:36.205802       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.206089       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.209774       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.210060       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.211663       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.211731       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.213131       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.213178       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:50:36.224567       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-s5twx"
	I0814 09:50:36.308419       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-gmk5j"
	E0814 09:51:02.307907       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:51:02.731841       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9] <==
	* I0814 09:50:34.212539       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:50:34.212604       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:50:34.212632       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:50:34.412568       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:50:34.412610       1 server_others.go:212] Using iptables Proxier.
	I0814 09:50:34.412625       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:50:34.412658       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:50:34.413285       1 server.go:643] Version: v1.21.3
	I0814 09:50:34.420355       1 config.go:315] Starting service config controller
	I0814 09:50:34.420380       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:50:34.420424       1 config.go:224] Starting endpoint slice config controller
	I0814 09:50:34.420433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:50:34.425120       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:50:34.431401       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:50:34.520879       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:50:34.520934       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799] <==
	* W0814 09:50:15.808240       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 09:50:15.808258       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 09:50:15.808266       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 09:50:15.824968       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:50:15.825017       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0814 09:50:15.825026       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:50:15.826778       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 09:50:15.905345       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:50:15.909182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909298       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:50:15.909404       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909484       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:50:15.909560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:50:15.909645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:50:15.909712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:50:15.909763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:50:15.909808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909855       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:50:15.909905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:50:15.909963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:50:15.910012       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:16.719502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:16.731061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:50:16.736953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0814 09:50:17.527017       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
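	The forbidden list/watch errors above are a common kube-scheduler startup race: the scheduler's informers begin listing before the apiserver has finished bootstrapping its default RBAC policy, and they stop on their own once the caches sync (see the sync message above). If they persisted, the scheduler's effective permissions could be checked with something like the following (an illustrative check, not part of the recorded run):
	
	  kubectl auth can-i list pods --as=system:kube-scheduler --all-namespaces
	  kubectl auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler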
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:45:14 UTC, end at Sat 2021-08-14 09:51:07 UTC. --
	Aug 14 09:50:42 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:42.521497    4845 scope.go:111] "RemoveContainer" containerID="6bf02b7a63d0e39db037c2f3b89fba9d4dfee9f801d768b39968f72bd9d2b45a"
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.240955    4845 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxgqp\" (UniqueName: \"kubernetes.io/projected/4259e384-ec50-41ce-9c60-fb8ed66f2b71-kube-api-access-nxgqp\") pod \"4259e384-ec50-41ce-9c60-fb8ed66f2b71\" (UID: \"4259e384-ec50-41ce-9c60-fb8ed66f2b71\") "
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.241003    4845 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4259e384-ec50-41ce-9c60-fb8ed66f2b71-config-volume\") pod \"4259e384-ec50-41ce-9c60-fb8ed66f2b71\" (UID: \"4259e384-ec50-41ce-9c60-fb8ed66f2b71\") "
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: W0814 09:50:43.241251    4845 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/4259e384-ec50-41ce-9c60-fb8ed66f2b71/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.241373    4845 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e384-ec50-41ce-9c60-fb8ed66f2b71-config-volume" (OuterVolumeSpecName: "config-volume") pod "4259e384-ec50-41ce-9c60-fb8ed66f2b71" (UID: "4259e384-ec50-41ce-9c60-fb8ed66f2b71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.265204    4845 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4259e384-ec50-41ce-9c60-fb8ed66f2b71-kube-api-access-nxgqp" (OuterVolumeSpecName: "kube-api-access-nxgqp") pod "4259e384-ec50-41ce-9c60-fb8ed66f2b71" (UID: "4259e384-ec50-41ce-9c60-fb8ed66f2b71"). InnerVolumeSpecName "kube-api-access-nxgqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.342257    4845 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4259e384-ec50-41ce-9c60-fb8ed66f2b71-config-volume\") on node \"embed-certs-20210814094325-6746\" DevicePath \"\""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.342294    4845 reconciler.go:319] "Volume detached for volume \"kube-api-access-nxgqp\" (UniqueName: \"kubernetes.io/projected/4259e384-ec50-41ce-9c60-fb8ed66f2b71-kube-api-access-nxgqp\") on node \"embed-certs-20210814094325-6746\" DevicePath \"\""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.519347    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:43.519720    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: W0814 09:50:43.865362    4845 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod054dc08e-4bc7-4ae9-adf6-55f654ff6b86/8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0 WatchSource:0}: task 8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0 not found: not found
	Aug 14 09:50:46 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:46.321535    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:46 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:46.321811    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296140    4845 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296185    4845 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296308    4845 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vzhhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler
{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-5nrfw_kube-system(5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296350    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-5nrfw" podUID=5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.225580    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.546177    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.546474    4845 scope.go:111] "RemoveContainer" containerID="1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:59.546875    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:51:00 embed-certs-20210814094325-6746 kubelet[4845]: W0814 09:51:00.785937    4845 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod054dc08e-4bc7-4ae9-adf6-55f654ff6b86/1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82 WatchSource:0}: task 1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82 not found: not found
	Aug 14 09:51:02 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:51:02.226698    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-5nrfw" podUID=5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7
	Aug 14 09:51:06 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:51:06.321486    4845 scope.go:111] "RemoveContainer" containerID="1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	Aug 14 09:51:06 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:51:06.321858    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
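	The metrics-server pull failures above are expected in this suite: the addon is deliberately pointed at fake.domain/k8s.gcr.io/echoserver:1.4, so every pull dies at DNS resolution and the pod alternates between ErrImagePull and ImagePullBackOff. The resolver failure can be reproduced on the node itself, e.g. (illustrative only, assuming crictl on the node is wired to its containerd socket):
	
	  out/minikube-linux-amd64 ssh -p embed-certs-20210814094325-6746 "sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4"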
	
	* 
	* ==> kubernetes-dashboard [efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af] <==
	* 2021/08/14 09:50:38 Starting overwatch
	2021/08/14 09:50:38 Using namespace: kubernetes-dashboard
	2021/08/14 09:50:38 Using in-cluster config to connect to apiserver
	2021/08/14 09:50:38 Using secret token for csrf signing
	2021/08/14 09:50:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:50:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:50:38 Successful initial request to the apiserver, version: v1.21.3
	2021/08/14 09:50:38 Generating JWE encryption key
	2021/08/14 09:50:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:50:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:50:39 Initializing JWE encryption key from synchronized object
	2021/08/14 09:50:39 Creating in-cluster Sidecar client
	2021/08/14 09:50:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:50:39 Serving insecurely on HTTP port: 9090
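	The metric client failure is consistent with the kubelet log above: dashboard-metrics-scraper is in CrashLoopBackOff, so its Service has no ready endpoints and the Sidecar client's health check gets "service unavailable"; the dashboard retries every 30 seconds. A hedged way to confirm (not captured in this run):
	
	  kubectl --context embed-certs-20210814094325-6746 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper
	  kubectl --context embed-certs-20210814094325-6746 -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper --previous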
	
	* 
	* ==> storage-provisioner [8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b] <==
	* I0814 09:50:36.806345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:50:36.814591       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:50:36.814644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:50:36.820487       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:50:36.820644       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e!
	I0814 09:50:36.821582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a0b34a7-1595-4e6c-a60d-8ec24e8b8d67", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e became leader
	I0814 09:50:36.921284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e!
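	The provisioner follows the stock client-go leader-election flow: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the controller once leadership is held. Since the lock shown above is an Endpoints object, the current holder can be read from its leader annotation (a hedged check; the annotation key is the client-go default and does not appear in this log):
	
	  kubectl --context embed-certs-20210814094325-6746 -n kube-system get endpoints k8s.io-minikube-hostpath \
	    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'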
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-5nrfw
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 describe pod metrics-server-7c784ccb57-5nrfw
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210814094325-6746 describe pod metrics-server-7c784ccb57-5nrfw: exit status 1 (102.784702ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-5nrfw" not found

** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210814094325-6746 describe pod metrics-server-7c784ccb57-5nrfw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.89s)

TestStartStop/group/embed-certs/serial/Pause (115.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210814094325-6746 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210814094325-6746 --alsologtostderr -v=1: exit status 80 (1.913756192s)

-- stdout --
	* Pausing node embed-certs-20210814094325-6746 ... 
	
	

-- /stdout --
** stderr ** 
	I0814 09:51:08.727528  246540 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:51:08.727623  246540 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:51:08.727631  246540 out.go:311] Setting ErrFile to fd 2...
	I0814 09:51:08.727636  246540 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:51:08.727799  246540 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:51:08.728016  246540 out.go:305] Setting JSON to false
	I0814 09:51:08.728040  246540 mustload.go:65] Loading cluster: embed-certs-20210814094325-6746
	I0814 09:51:08.728487  246540 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:51:08.729078  246540 cli_runner.go:115] Run: docker container inspect embed-certs-20210814094325-6746 --format={{.State.Status}}
	I0814 09:51:08.774718  246540 host.go:66] Checking if "embed-certs-20210814094325-6746" exists ...
	I0814 09:51:08.776552  246540 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210814094325-6746 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0814 09:51:08.779941  246540 out.go:177] * Pausing node embed-certs-20210814094325-6746 ... 
	I0814 09:51:08.779965  246540 host.go:66] Checking if "embed-certs-20210814094325-6746" exists ...
	I0814 09:51:08.780197  246540 ssh_runner.go:149] Run: systemctl --version
	I0814 09:51:08.780230  246540 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210814094325-6746
	I0814 09:51:08.818528  246540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/embed-certs-20210814094325-6746/id_rsa Username:docker}
	I0814 09:51:08.912563  246540 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:08.922424  246540 pause.go:50] kubelet running: true
	I0814 09:51:08.922475  246540 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:51:09.057694  246540 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:51:09.057771  246540 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:51:09.128886  246540 cri.go:76] found id: "8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b"
	I0814 09:51:09.128922  246540 cri.go:76] found id: "c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5"
	I0814 09:51:09.128931  246540 cri.go:76] found id: "c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f"
	I0814 09:51:09.128940  246540 cri.go:76] found id: "f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9"
	I0814 09:51:09.128948  246540 cri.go:76] found id: "df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799"
	I0814 09:51:09.128961  246540 cri.go:76] found id: "36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee"
	I0814 09:51:09.128972  246540 cri.go:76] found id: "a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83"
	I0814 09:51:09.128983  246540 cri.go:76] found id: "90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55"
	I0814 09:51:09.129002  246540 cri.go:76] found id: "1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	I0814 09:51:09.129019  246540 cri.go:76] found id: "efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af"
	I0814 09:51:09.129031  246540 cri.go:76] found id: ""
	I0814 09:51:09.129086  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:51:09.176706  246540 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","pid":5770,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0/rootfs","created":"2021-08-14T09:50:35.201711431Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-wjlqr_052286f2-685f-4520-8f5c-13e35b07e27e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b","pid":4591,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07ce24f2cc07a7fcb1d7192f0ac343c
46fe57d9d375042c4c6813520e981f76b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b/rootfs","created":"2021-08-14T09:50:11.389038492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210814094325-6746_c092de207f478e670a34ec7dddf3ef8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","pid":4592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256/rootfs","created":"2021-08-14T09:50:11.389038043Z","annotations":{"io.kubernetes.cri.container-type"
:"sandbox","io.kubernetes.cri.sandbox-id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210814094325-6746_1f6da9e3ccfdb57a1d4c1c871db4b810"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","pid":4606,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e/rootfs","created":"2021-08-14T09:50:11.389031567Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210814094325-6746_388d846d95a759cc1904825736b36
059"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","pid":4607,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a/rootfs","created":"2021-08-14T09:50:11.389030134Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210814094325-6746_88225f9c0820f13995585d22209c680d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee","pid":4742,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c7
10ee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee/rootfs","created":"2021-08-14T09:50:11.677032168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","pid":5999,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf/rootfs","created":"2021-08-14T09:50:36.525046685Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","io.kub
ernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_3f6d1385-66f0-49e9-a561-d557c138f7b6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","pid":5312,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46/rootfs","created":"2021-08-14T09:50:33.568969668Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xcshh_cbf58cc2-48cb-4eca-8d30-904694fbb480"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","pid":6308,"status":"running","bundle":"/run/container
d/io.containerd.runtime.v2.task/k8s.io/7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2/rootfs","created":"2021-08-14T09:50:38.25706149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-s5twx_a8bd4234-6263-4b5b-a621-d2337301a035"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b","pid":6105,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b/rootfs","created":"2021-08-1
4T09:50:36.761036257Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55","pid":4719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55/rootfs","created":"2021-08-14T09:50:11.612967265Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","pid":6209,"st
atus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c/rootfs","created":"2021-08-14T09:50:37.096960189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gmk5j_054dc08e-4bc7-4ae9-adf6-55f654ff6b86"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83","pid":4727,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2029d64830b8cc096ea505bda7b0334dd1ceda315758a
cb811abf9d3030dc83/rootfs","created":"2021-08-14T09:50:11.636957032Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f","pid":5801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f/rootfs","created":"2021-08-14T09:50:35.529016371Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c2051bc4ae8724836fece2ca06268cd
e848802ecd77d139e6b35c4d067dfc9b5","pid":5920,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5/rootfs","created":"2021-08-14T09:50:36.121048936Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","pid":5431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074/rootfs","created":"2021-08-14T09:50:34.30168
8528Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-kvv65_c0bc8515-5565-4fb1-a82d-d01bc090d641"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","pid":6081,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7/rootfs","created":"2021-08-14T09:50:36.697016505Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-5nrfw_5fdab3ce-8f70-4d45-8bf8-fa
d6c17b49a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799","pid":4735,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799/rootfs","created":"2021-08-14T09:50:11.653004381Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af","pid":6343,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd9db9208
5eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af/rootfs","created":"2021-08-14T09:50:38.601459158Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9","pid":5404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9/rootfs","created":"2021-08-14T09:50:33.841139858Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46"},"owner":"root"}]
	I0814 09:51:09.176974  246540 cri.go:113] list returned 20 containers
	I0814 09:51:09.176991  246540 cri.go:116] container: {ID:021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0 Status:running}
	I0814 09:51:09.177036  246540 cri.go:118] skipping 021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0 - not in ps
	I0814 09:51:09.177046  246540 cri.go:116] container: {ID:07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b Status:running}
	I0814 09:51:09.177056  246540 cri.go:118] skipping 07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b - not in ps
	I0814 09:51:09.177064  246540 cri.go:116] container: {ID:09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256 Status:running}
	I0814 09:51:09.177068  246540 cri.go:118] skipping 09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256 - not in ps
	I0814 09:51:09.177072  246540 cri.go:116] container: {ID:1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e Status:running}
	I0814 09:51:09.177079  246540 cri.go:118] skipping 1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e - not in ps
	I0814 09:51:09.177085  246540 cri.go:116] container: {ID:247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a Status:running}
	I0814 09:51:09.177094  246540 cri.go:118] skipping 247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a - not in ps
	I0814 09:51:09.177103  246540 cri.go:116] container: {ID:36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee Status:running}
	I0814 09:51:09.177109  246540 cri.go:116] container: {ID:3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf Status:running}
	I0814 09:51:09.177119  246540 cri.go:118] skipping 3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf - not in ps
	I0814 09:51:09.177128  246540 cri.go:116] container: {ID:5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46 Status:running}
	I0814 09:51:09.177137  246540 cri.go:118] skipping 5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46 - not in ps
	I0814 09:51:09.177146  246540 cri.go:116] container: {ID:7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2 Status:running}
	I0814 09:51:09.177156  246540 cri.go:118] skipping 7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2 - not in ps
	I0814 09:51:09.177167  246540 cri.go:116] container: {ID:8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b Status:running}
	I0814 09:51:09.177174  246540 cri.go:116] container: {ID:90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55 Status:running}
	I0814 09:51:09.177179  246540 cri.go:116] container: {ID:913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c Status:running}
	I0814 09:51:09.177191  246540 cri.go:118] skipping 913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c - not in ps
	I0814 09:51:09.177202  246540 cri.go:116] container: {ID:a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83 Status:running}
	I0814 09:51:09.177209  246540 cri.go:116] container: {ID:c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f Status:running}
	I0814 09:51:09.177219  246540 cri.go:116] container: {ID:c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5 Status:running}
	I0814 09:51:09.177235  246540 cri.go:116] container: {ID:c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074 Status:running}
	I0814 09:51:09.177246  246540 cri.go:118] skipping c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074 - not in ps
	I0814 09:51:09.177255  246540 cri.go:116] container: {ID:dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7 Status:running}
	I0814 09:51:09.177265  246540 cri.go:118] skipping dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7 - not in ps
	I0814 09:51:09.177272  246540 cri.go:116] container: {ID:df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799 Status:running}
	I0814 09:51:09.177277  246540 cri.go:116] container: {ID:efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af Status:running}
	I0814 09:51:09.177286  246540 cri.go:116] container: {ID:f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9 Status:running}
	I0814 09:51:09.177341  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee
	I0814 09:51:09.191150  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee 8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b
	I0814 09:51:09.203668  246540 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee 8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:51:09Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
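	This is the root of the pause failure: minikube batched two container IDs into a single runc pause invocation, but as the usage text above says, runc pause takes exactly one container ID per call, so the batched form exits with status 1 and minikube drops into its retry loop. A minimal sketch of the per-container form the usage text implies (IDs taken from the listing above):
	
	  for id in 36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee \
	            8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b; do
	    sudo runc --root /run/containerd/runc/k8s.io pause "$id"
	  done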
	I0814 09:51:09.480011  246540 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:09.489519  246540 pause.go:50] kubelet running: false
	I0814 09:51:09.489574  246540 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:51:09.606192  246540 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:51:09.606285  246540 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:51:09.682638  246540 cri.go:76] found id: "8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b"
	I0814 09:51:09.682662  246540 cri.go:76] found id: "c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5"
	I0814 09:51:09.682669  246540 cri.go:76] found id: "c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f"
	I0814 09:51:09.682675  246540 cri.go:76] found id: "f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9"
	I0814 09:51:09.682680  246540 cri.go:76] found id: "df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799"
	I0814 09:51:09.682686  246540 cri.go:76] found id: "36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee"
	I0814 09:51:09.682691  246540 cri.go:76] found id: "a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83"
	I0814 09:51:09.682696  246540 cri.go:76] found id: "90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55"
	I0814 09:51:09.682702  246540 cri.go:76] found id: "1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	I0814 09:51:09.682713  246540 cri.go:76] found id: "efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af"
	I0814 09:51:09.682729  246540 cri.go:76] found id: ""
	I0814 09:51:09.682776  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:51:09.732450  246540 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","pid":5770,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0/rootfs","created":"2021-08-14T09:50:35.201711431Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-wjlqr_052286f2-685f-4520-8f5c-13e35b07e27e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b","pid":4591,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07ce24f2cc07a7fcb1d7192f0ac343c
46fe57d9d375042c4c6813520e981f76b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b/rootfs","created":"2021-08-14T09:50:11.389038492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210814094325-6746_c092de207f478e670a34ec7dddf3ef8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","pid":4592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256/rootfs","created":"2021-08-14T09:50:11.389038043Z","annotations":{"io.kubernetes.cri.container-type"
:"sandbox","io.kubernetes.cri.sandbox-id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210814094325-6746_1f6da9e3ccfdb57a1d4c1c871db4b810"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","pid":4606,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e/rootfs","created":"2021-08-14T09:50:11.389031567Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210814094325-6746_388d846d95a759cc1904825736b36
059"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","pid":4607,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a/rootfs","created":"2021-08-14T09:50:11.389030134Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210814094325-6746_88225f9c0820f13995585d22209c680d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee","pid":4742,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c71
0ee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee/rootfs","created":"2021-08-14T09:50:11.677032168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","pid":5999,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf/rootfs","created":"2021-08-14T09:50:36.525046685Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","io.kube
rnetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_3f6d1385-66f0-49e9-a561-d557c138f7b6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","pid":5312,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46/rootfs","created":"2021-08-14T09:50:33.568969668Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xcshh_cbf58cc2-48cb-4eca-8d30-904694fbb480"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","pid":6308,"status":"running","bundle":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2/rootfs","created":"2021-08-14T09:50:38.25706149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-s5twx_a8bd4234-6263-4b5b-a621-d2337301a035"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b","pid":6105,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b/rootfs","created":"2021-08-14
T09:50:36.761036257Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55","pid":4719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55/rootfs","created":"2021-08-14T09:50:11.612967265Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","pid":6209,"sta
tus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c/rootfs","created":"2021-08-14T09:50:37.096960189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gmk5j_054dc08e-4bc7-4ae9-adf6-55f654ff6b86"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83","pid":4727,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2029d64830b8cc096ea505bda7b0334dd1ceda315758ac
b811abf9d3030dc83/rootfs","created":"2021-08-14T09:50:11.636957032Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f","pid":5801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f/rootfs","created":"2021-08-14T09:50:35.529016371Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c2051bc4ae8724836fece2ca06268cde
848802ecd77d139e6b35c4d067dfc9b5","pid":5920,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5/rootfs","created":"2021-08-14T09:50:36.121048936Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","pid":5431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074/rootfs","created":"2021-08-14T09:50:34.301688
528Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-kvv65_c0bc8515-5565-4fb1-a82d-d01bc090d641"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","pid":6081,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7/rootfs","created":"2021-08-14T09:50:36.697016505Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-5nrfw_5fdab3ce-8f70-4d45-8bf8-fad
6c17b49a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799","pid":4735,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799/rootfs","created":"2021-08-14T09:50:11.653004381Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af","pid":6343,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd9db92085
eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af/rootfs","created":"2021-08-14T09:50:38.601459158Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9","pid":5404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9/rootfs","created":"2021-08-14T09:50:33.841139858Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46"},"owner":"root"}]
	I0814 09:51:09.732724  246540 cri.go:113] list returned 20 containers
	I0814 09:51:09.732738  246540 cri.go:116] container: {ID:021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0 Status:running}
	I0814 09:51:09.732752  246540 cri.go:118] skipping 021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0 - not in ps
	I0814 09:51:09.732760  246540 cri.go:116] container: {ID:07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b Status:running}
	I0814 09:51:09.732767  246540 cri.go:118] skipping 07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b - not in ps
	I0814 09:51:09.732776  246540 cri.go:116] container: {ID:09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256 Status:running}
	I0814 09:51:09.732783  246540 cri.go:118] skipping 09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256 - not in ps
	I0814 09:51:09.732789  246540 cri.go:116] container: {ID:1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e Status:running}
	I0814 09:51:09.732830  246540 cri.go:118] skipping 1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e - not in ps
	I0814 09:51:09.732840  246540 cri.go:116] container: {ID:247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a Status:running}
	I0814 09:51:09.732847  246540 cri.go:118] skipping 247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a - not in ps
	I0814 09:51:09.732853  246540 cri.go:116] container: {ID:36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee Status:paused}
	I0814 09:51:09.732862  246540 cri.go:122] skipping {36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee paused}: state = "paused", want "running"
	I0814 09:51:09.732878  246540 cri.go:116] container: {ID:3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf Status:running}
	I0814 09:51:09.732885  246540 cri.go:118] skipping 3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf - not in ps
	I0814 09:51:09.732899  246540 cri.go:116] container: {ID:5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46 Status:running}
	I0814 09:51:09.732910  246540 cri.go:118] skipping 5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46 - not in ps
	I0814 09:51:09.732915  246540 cri.go:116] container: {ID:7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2 Status:running}
	I0814 09:51:09.732925  246540 cri.go:118] skipping 7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2 - not in ps
	I0814 09:51:09.732931  246540 cri.go:116] container: {ID:8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b Status:running}
	I0814 09:51:09.732941  246540 cri.go:116] container: {ID:90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55 Status:running}
	I0814 09:51:09.732948  246540 cri.go:116] container: {ID:913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c Status:running}
	I0814 09:51:09.732955  246540 cri.go:118] skipping 913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c - not in ps
	I0814 09:51:09.732964  246540 cri.go:116] container: {ID:a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83 Status:running}
	I0814 09:51:09.732971  246540 cri.go:116] container: {ID:c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f Status:running}
	I0814 09:51:09.732980  246540 cri.go:116] container: {ID:c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5 Status:running}
	I0814 09:51:09.732987  246540 cri.go:116] container: {ID:c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074 Status:running}
	I0814 09:51:09.732997  246540 cri.go:118] skipping c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074 - not in ps
	I0814 09:51:09.733003  246540 cri.go:116] container: {ID:dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7 Status:running}
	I0814 09:51:09.733014  246540 cri.go:118] skipping dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7 - not in ps
	I0814 09:51:09.733019  246540 cri.go:116] container: {ID:df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799 Status:running}
	I0814 09:51:09.733026  246540 cri.go:116] container: {ID:efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af Status:running}
	I0814 09:51:09.733040  246540 cri.go:116] container: {ID:f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9 Status:running}
	I0814 09:51:09.733088  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b
	I0814 09:51:09.749973  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b 90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55
	I0814 09:51:09.763341  246540 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b 90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:51:09Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:51:10.304018  246540 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:10.314518  246540 pause.go:50] kubelet running: false
	I0814 09:51:10.314575  246540 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:51:10.429373  246540 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:51:10.429452  246540 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:51:10.498444  246540 cri.go:76] found id: "8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b"
	I0814 09:51:10.498467  246540 cri.go:76] found id: "c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5"
	I0814 09:51:10.498472  246540 cri.go:76] found id: "c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f"
	I0814 09:51:10.498476  246540 cri.go:76] found id: "f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9"
	I0814 09:51:10.498479  246540 cri.go:76] found id: "df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799"
	I0814 09:51:10.498483  246540 cri.go:76] found id: "36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee"
	I0814 09:51:10.498487  246540 cri.go:76] found id: "a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83"
	I0814 09:51:10.498490  246540 cri.go:76] found id: "90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55"
	I0814 09:51:10.498494  246540 cri.go:76] found id: "1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	I0814 09:51:10.498500  246540 cri.go:76] found id: "efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af"
	I0814 09:51:10.498507  246540 cri.go:76] found id: ""
	I0814 09:51:10.498557  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:51:10.545044  246540 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","pid":5770,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0/rootfs","created":"2021-08-14T09:50:35.201711431Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-wjlqr_052286f2-685f-4520-8f5c-13e35b07e27e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b","pid":4591,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07ce24f2cc07a7fcb1d7192f0ac343c
46fe57d9d375042c4c6813520e981f76b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b/rootfs","created":"2021-08-14T09:50:11.389038492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210814094325-6746_c092de207f478e670a34ec7dddf3ef8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","pid":4592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256/rootfs","created":"2021-08-14T09:50:11.389038043Z","annotations":{"io.kubernetes.cri.container-type"
:"sandbox","io.kubernetes.cri.sandbox-id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210814094325-6746_1f6da9e3ccfdb57a1d4c1c871db4b810"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","pid":4606,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e/rootfs","created":"2021-08-14T09:50:11.389031567Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210814094325-6746_388d846d95a759cc1904825736b36
059"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","pid":4607,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a/rootfs","created":"2021-08-14T09:50:11.389030134Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210814094325-6746_88225f9c0820f13995585d22209c680d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee","pid":4742,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c71
0ee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee/rootfs","created":"2021-08-14T09:50:11.677032168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","pid":5999,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf/rootfs","created":"2021-08-14T09:50:36.525046685Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf","io.kube
rnetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_3f6d1385-66f0-49e9-a561-d557c138f7b6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","pid":5312,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46/rootfs","created":"2021-08-14T09:50:33.568969668Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xcshh_cbf58cc2-48cb-4eca-8d30-904694fbb480"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","pid":6308,"status":"running","bundle":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2/rootfs","created":"2021-08-14T09:50:38.25706149Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-s5twx_a8bd4234-6263-4b5b-a621-d2337301a035"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b","pid":6105,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b/rootfs","created":"2021-08-14T
09:50:36.761036257Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55","pid":4719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55/rootfs","created":"2021-08-14T09:50:11.612967265Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","pid":6209,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c/rootfs","created":"2021-08-14T09:50:37.096960189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gmk5j_054dc08e-4bc7-4ae9-adf6-55f654ff6b86"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83","pid":4727,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb
811abf9d3030dc83/rootfs","created":"2021-08-14T09:50:11.636957032Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f","pid":5801,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f/rootfs","created":"2021-08-14T09:50:35.529016371Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c2051bc4ae8724836fece2ca06268cde8
48802ecd77d139e6b35c4d067dfc9b5","pid":5920,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5/rootfs","created":"2021-08-14T09:50:36.121048936Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","pid":5431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074/rootfs","created":"2021-08-14T09:50:34.3016885
28Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-kvv65_c0bc8515-5565-4fb1-a82d-d01bc090d641"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","pid":6081,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7/rootfs","created":"2021-08-14T09:50:36.697016505Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-5nrfw_5fdab3ce-8f70-4d45-8bf8-fad6
c17b49a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799","pid":4735,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799/rootfs","created":"2021-08-14T09:50:11.653004381Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af","pid":6343,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd9db92085e
b26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af/rootfs","created":"2021-08-14T09:50:38.601459158Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9","pid":5404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9/rootfs","created":"2021-08-14T09:50:33.841139858Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46"},"owner":"root"}]
	I0814 09:51:10.545346  246540 cri.go:113] list returned 20 containers
	I0814 09:51:10.545361  246540 cri.go:116] container: {ID:021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0 Status:running}
	I0814 09:51:10.545374  246540 cri.go:118] skipping 021199ccb79c5025143fc65e338e68e0008686d9ac132456bf7a7c2bb3bc0ec0 - not in ps
	I0814 09:51:10.545382  246540 cri.go:116] container: {ID:07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b Status:running}
	I0814 09:51:10.545387  246540 cri.go:118] skipping 07ce24f2cc07a7fcb1d7192f0ac343c46fe57d9d375042c4c6813520e981f76b - not in ps
	I0814 09:51:10.545390  246540 cri.go:116] container: {ID:09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256 Status:running}
	I0814 09:51:10.545408  246540 cri.go:118] skipping 09addeb11662d85b2e89366d706603a96acb6d3c8550f5c7647c1254f1dd7256 - not in ps
	I0814 09:51:10.545414  246540 cri.go:116] container: {ID:1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e Status:running}
	I0814 09:51:10.545422  246540 cri.go:118] skipping 1c884595728148c98848fee2132a8e9a7181a986e1ef996ba96e1f6ad1c7709e - not in ps
	I0814 09:51:10.545427  246540 cri.go:116] container: {ID:247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a Status:running}
	I0814 09:51:10.545434  246540 cri.go:118] skipping 247a0e8677d0441bc5fb446f8109a69fc23d6c8c70440b61aea1309b3526d78a - not in ps
	I0814 09:51:10.545439  246540 cri.go:116] container: {ID:36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee Status:paused}
	I0814 09:51:10.545446  246540 cri.go:122] skipping {36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee paused}: state = "paused", want "running"
	I0814 09:51:10.545463  246540 cri.go:116] container: {ID:3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf Status:running}
	I0814 09:51:10.545469  246540 cri.go:118] skipping 3cad7404cb4f261940a2d54b6f343238068880725276ee943ffdca7c1c843bbf - not in ps
	I0814 09:51:10.545474  246540 cri.go:116] container: {ID:5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46 Status:running}
	I0814 09:51:10.545481  246540 cri.go:118] skipping 5d392d166e0203fc103424eaa10b6ec8949e10bfa93c5c148a64d5a8a3a21b46 - not in ps
	I0814 09:51:10.545487  246540 cri.go:116] container: {ID:7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2 Status:running}
	I0814 09:51:10.545493  246540 cri.go:118] skipping 7e54e81316330182076072ed9e626924f302a69618e80d4879214331cf338cf2 - not in ps
	I0814 09:51:10.545498  246540 cri.go:116] container: {ID:8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b Status:paused}
	I0814 09:51:10.545505  246540 cri.go:122] skipping {8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b paused}: state = "paused", want "running"
	I0814 09:51:10.545512  246540 cri.go:116] container: {ID:90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55 Status:running}
	I0814 09:51:10.545518  246540 cri.go:116] container: {ID:913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c Status:running}
	I0814 09:51:10.545524  246540 cri.go:118] skipping 913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c - not in ps
	I0814 09:51:10.545529  246540 cri.go:116] container: {ID:a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83 Status:running}
	I0814 09:51:10.545535  246540 cri.go:116] container: {ID:c145e33f75c99bc99c0f5db417a1193d3c239d44f1a18093ed38cab9bdb08d4f Status:running}
	I0814 09:51:10.545542  246540 cri.go:116] container: {ID:c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5 Status:running}
	I0814 09:51:10.545548  246540 cri.go:116] container: {ID:c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074 Status:running}
	I0814 09:51:10.545554  246540 cri.go:118] skipping c9bd7ea8434fc6af3f3380b07a3f6f091638337b0615373d7f31fc199afce074 - not in ps
	I0814 09:51:10.545559  246540 cri.go:116] container: {ID:dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7 Status:running}
	I0814 09:51:10.545565  246540 cri.go:118] skipping dc458848d443c73957309e48c4c80c25c8b942fd42efbc8d2dad444263d8e0d7 - not in ps
	I0814 09:51:10.545570  246540 cri.go:116] container: {ID:df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799 Status:running}
	I0814 09:51:10.545576  246540 cri.go:116] container: {ID:efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af Status:running}
	I0814 09:51:10.545585  246540 cri.go:116] container: {ID:f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9 Status:running}
	I0814 09:51:10.545636  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55
	I0814 09:51:10.561742  246540 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55 a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83
	I0814 09:51:10.576462  246540 out.go:177] 
	W0814 09:51:10.576580  246540 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55 a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:51:10Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0814 09:51:10.576596  246540 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0814 09:51:10.579080  246540 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0814 09:51:10.580315  246540 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210814094325-6746 --alsologtostderr -v=1 failed: exit status 80
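The root cause of this failure is visible in the captured trace: after filtering the 20 listed containers down to those that are both running and present in the `crictl ps` output, minikube pauses the first surviving ID on its own and then re-invokes `sudo runc --root /run/containerd/runc/k8s.io pause` with two container IDs on one command line. runc's pause subcommand takes exactly one container ID, so every batched call exits with status 1 (`"pause" requires exactly 1 argument(s)`), the retry hits the same wall, and the test aborts with GUEST_PAUSE / exit status 80. Below is a minimal sketch of the per-ID alternative; the helper name and the hard-coded IDs (taken from the failing call above) are illustrative only, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // pauseAll issues one "runc pause" per container ID. Passing several IDs
    // to a single invocation fails with `"pause" requires exactly 1
    // argument(s)`, as captured in the stderr above.
    func pauseAll(root string, ids []string) error {
    	for _, id := range ids {
    		out, err := exec.Command("sudo", "runc", "--root", root, "pause", id).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	ids := []string{
    		"90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55", // etcd
    		"a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83", // kube-apiserver
    	}
    	if err := pauseAll("/run/containerd/runc/k8s.io", ids); err != nil {
    		fmt.Println(err)
    	}
    }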
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210814094325-6746
helpers_test.go:236: (dbg) docker inspect embed-certs-20210814094325-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576",
	        "Created": "2021-08-14T09:43:27.289846985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219779,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:45:14.416227785Z",
	            "FinishedAt": "2021-08-14T09:45:12.088163109Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hosts",
	        "LogPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576-json.log",
	        "Name": "/embed-certs-20210814094325-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210814094325-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210814094325-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210814094325-6746",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210814094325-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210814094325-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a6933adbbd3ce722a675e5adeafc189199fe1d8fada7eebf787d37f915e239a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32948"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1a6933adbbd3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210814094325-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d2385af2cb05"
	                    ],
	                    "NetworkID": "dbc6f9acad495850f4b0b885d051bfbd2cce05a9032571d93062419b0fbb36d2",
	                    "EndpointID": "8a463f9704f74ef43e52668f81108caad669353d43761c271d2d4d574c959212",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
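For the post-mortem, the useful part of this inspect dump is `NetworkSettings.Ports`: the guest's 22/tcp is published on 127.0.0.1:32948, which is how the harness reaches the node over SSH, and that mapping only exists while the container is running. The following is a small illustrative lookup of the same field via a docker inspect format string (the profile name is the one from this run; this is a sketch, not the harness's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Reads .NetworkSettings.Ports["22/tcp"][0].HostPort from the same
    	// JSON shown above; the lookup fails while the container is stopped.
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "inspect", "-f", format,
    		"embed-certs-20210814094325-6746").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32948 in this run
    }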
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746: exit status 2 (14.490901658s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:51:25.119969  247026 status.go:422] Error apiserver status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
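This 500 is the downstream effect of the partial pause: the single-ID `runc pause 90b59592…` call earlier in the trace appears to have succeeded (no error is logged for it), leaving etcd frozen while the apiserver kept running, and the healthz output accordingly shows exactly one failing check, `[-]etcd`. Below is an illustrative probe of the same endpoint, assuming the host/port from this run's inspect output; certificate verification is skipped only because the test node serves a self-signed cert:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.58.2:8443/healthz?verbose")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// Returns 500 while any individual check fails; the body lists each
    	// check with a [+]/[-] prefix, as in the captured stderr above.
    	fmt.Println("status:", resp.StatusCode)
    	fmt.Print(string(body))
    }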
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210814094325-6746 logs -n 25
E0814 09:51:35.243119    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210814094325-6746 logs -n 25: exit status 110 (22.572938992s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:17 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:41:38 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                           |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                           |         |         |                               |                               |
	|         | --keep-context=false                              |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                           |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:18 UTC | Sat, 14 Aug 2021 09:43:19 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:20 UTC | Sat, 14 Aug 2021 09:43:21 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:21 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                           |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                   | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:50 UTC | Sat, 14 Aug 2021 09:44:51 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:51 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:48:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:48:45 UTC | Sat, 14 Aug 2021 09:48:45 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:34 UTC | Sat, 14 Aug 2021 09:50:38 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210814094108-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:38 UTC | Sat, 14 Aug 2021 09:50:39 UTC |
	|         | no-preload-20210814094108-6746                    |                                           |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210814095039-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:39 UTC | Sat, 14 Aug 2021 09:50:40 UTC |
	|         | disable-driver-mounts-20210814095039-6746         |                                           |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:50:56 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                           |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                   | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:06 UTC | Sat, 14 Aug 2021 09:51:07 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210814094325-6746           | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:08 UTC | Sat, 14 Aug 2021 09:51:08 UTC |
	|         | embed-certs-20210814094325-6746                   |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:50:40
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
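The header line above documents klog's layout for every entry that follows. A minimal Go sketch for parsing that layout, assuming nothing beyond the `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` template quoted above (the regexp is an illustrative derivation, not minikube code); the sample line is copied from this log:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine follows the quoted template: severity, month, day, time with
	// microseconds, thread/process id, source file:line, then the message.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		line := "I0814 09:50:40.078160  242948 out.go:298] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s-%s time=%s pid=%s source=%s\nmsg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}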
	I0814 09:50:40.078160  242948 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:50:40.078244  242948 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:50:40.078254  242948 out.go:311] Setting ErrFile to fd 2...
	I0814 09:50:40.078258  242948 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:50:40.078366  242948 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:50:40.078628  242948 out.go:305] Setting JSON to false
	I0814 09:50:40.119352  242948 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5602,"bootTime":1628929038,"procs":276,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:50:40.119448  242948 start.go:121] virtualization: kvm guest
	I0814 09:50:40.122500  242948 out.go:177] * [default-k8s-different-port-20210814095040-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:50:40.124210  242948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:50:40.122699  242948 notify.go:169] Checking for updates...
	I0814 09:50:40.125676  242948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:50:40.127206  242948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:50:40.128678  242948 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:50:40.129277  242948 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:50:40.129398  242948 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:50:40.129490  242948 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:50:40.129531  242948 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:50:40.189038  242948 docker.go:132] docker version: linux-19.03.15
	I0814 09:50:40.189164  242948 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:50:40.289687  242948 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:50:40.235277528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:50:40.289786  242948 docker.go:244] overlay module found
	I0814 09:50:40.291517  242948 out.go:177] * Using the docker driver based on user configuration
	I0814 09:50:40.291541  242948 start.go:278] selected driver: docker
	I0814 09:50:40.291546  242948 start.go:751] validating driver "docker" against <nil>
	I0814 09:50:40.291562  242948 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:50:40.291608  242948 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:50:40.291627  242948 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:50:40.292971  242948 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
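The two warnings above come from the cgroup capability flags docker reports. A minimal Go sketch that queries those same two flags through the docker CLI; the `--format` template here is an illustrative choice (minikube itself reads them from the `docker system info --format "{{json .}}"` call logged nearby):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// MemoryLimit and SwapLimit are the fields behind the cgroup warnings;
		// the docker info dump in this log shows MemoryLimit:true SwapLimit:false.
		out, err := exec.Command("docker", "info",
			"--format", "mem={{.MemoryLimit}} swap={{.SwapLimit}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}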
	I0814 09:50:40.293780  242948 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:50:40.384600  242948 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:50:40.338311012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:50:40.384710  242948 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:50:40.384920  242948 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:50:40.384946  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:50:40.384954  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:50:40.384964  242948 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:50:40.384974  242948 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:50:40.384981  242948 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:50:40.384991  242948 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:50:40.387054  242948 out.go:177] * Starting control plane node default-k8s-different-port-20210814095040-6746 in cluster default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.387087  242948 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:50:40.388430  242948 out.go:177] * Pulling base image ...
	I0814 09:50:40.388458  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:40.388489  242948 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:50:40.388500  242948 cache.go:56] Caching tarball of preloaded images
	I0814 09:50:40.388547  242948 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:50:40.388667  242948 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:50:40.388684  242948 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:50:40.388818  242948 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json ...
	I0814 09:50:40.388847  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json: {Name:mk37096ce7d1c408ab2119b9d1016f0ec54511d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:40.477442  242948 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:50:40.477474  242948 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:50:40.477489  242948 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:50:40.477530  242948 start.go:313] acquiring machines lock for default-k8s-different-port-20210814095040-6746: {Name:mke7f558db837977766a2f1aff9770a5c1ff83a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:50:40.477640  242948 start.go:317] acquired machines lock for "default-k8s-different-port-20210814095040-6746" in 92.564µs
	I0814 09:50:40.477663  242948 start.go:89] Provisioning new machine with config: &{Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:50:40.477786  242948 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:50:37.722771  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:39.723935  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:41.724562  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:40.480990  242948 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0814 09:50:40.481210  242948 start.go:160] libmachine.API.Create for "default-k8s-different-port-20210814095040-6746" (driver="docker")
	I0814 09:50:40.481241  242948 client.go:168] LocalClient.Create starting
	I0814 09:50:40.481338  242948 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:50:40.481371  242948 main.go:130] libmachine: Decoding PEM data...
	I0814 09:50:40.481389  242948 main.go:130] libmachine: Parsing certificate...
	I0814 09:50:40.481488  242948 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:50:40.481510  242948 main.go:130] libmachine: Decoding PEM data...
	I0814 09:50:40.481528  242948 main.go:130] libmachine: Parsing certificate...
	I0814 09:50:40.481849  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:50:40.525737  242948 cli_runner.go:162] docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:50:40.525811  242948 network_create.go:255] running [docker network inspect default-k8s-different-port-20210814095040-6746] to gather additional debugging logs...
	I0814 09:50:40.525834  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746
	W0814 09:50:40.571336  242948 cli_runner.go:162] docker network inspect default-k8s-different-port-20210814095040-6746 returned with exit code 1
	I0814 09:50:40.571370  242948 network_create.go:258] error running [docker network inspect default-k8s-different-port-20210814095040-6746]: docker network inspect default-k8s-different-port-20210814095040-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.571397  242948 network_create.go:260] output of [docker network inspect default-k8s-different-port-20210814095040-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20210814095040-6746
	
	** /stderr **
	I0814 09:50:40.571446  242948 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:50:40.614259  242948 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000114740] misses:0}
	I0814 09:50:40.614311  242948 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:50:40.614327  242948 network_create.go:106] attempt to create docker network default-k8s-different-port-20210814095040-6746 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0814 09:50:40.614367  242948 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.694116  242948 network_create.go:90] docker network default-k8s-different-port-20210814095040-6746 192.168.49.0/24 created
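The derived addresses logged above for the reserved 192.168.49.0/24 range (gateway .1, client range .2 through .254, broadcast .255) follow from plain subnet arithmetic. A minimal Go sketch of that arithmetic, assuming an IPv4 network with a last-octet-only host part like this /24 (an illustration, not minikube's network.go):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()
		mask := net.IP(ipnet.Mask).To4()
		// Broadcast sets every host bit of the base address.
		bcast := make(net.IP, 4)
		for i := range base {
			bcast[i] = base[i] | ^mask[i]
		}
		fmt.Println("gateway:  ", net.IPv4(base[0], base[1], base[2], base[3]+1))
		fmt.Println("clientMin:", net.IPv4(base[0], base[1], base[2], base[3]+2))
		fmt.Println("clientMax:", net.IPv4(base[0], base[1], base[2], bcast[3]-1))
		fmt.Println("broadcast:", bcast)
	}

Run against 192.168.49.0/24 this prints the same four addresses the log reports.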
	I0814 09:50:40.694168  242948 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20210814095040-6746" container
	I0814 09:50:40.694232  242948 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:50:40.739929  242948 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20210814095040-6746 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:50:40.779996  242948 oci.go:102] Successfully created a docker volume default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.780078  242948 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20210814095040-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --entrypoint /usr/bin/test -v default-k8s-different-port-20210814095040-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:50:41.548292  242948 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20210814095040-6746
	W0814 09:50:41.548348  242948 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:50:41.548361  242948 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:50:41.548375  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:41.548406  242948 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:50:41.548418  242948 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:50:41.548477  242948 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210814095040-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:50:41.636582  242948 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20210814095040-6746 --name default-k8s-different-port-20210814095040-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --network default-k8s-different-port-20210814095040-6746 --ip 192.168.49.2 --volume default-k8s-different-port-20210814095040-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:50:42.144372  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Running}}
	I0814 09:50:42.192564  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:42.243507  242948 cli_runner.go:115] Run: docker exec default-k8s-different-port-20210814095040-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:50:42.382169  242948 oci.go:278] the created container "default-k8s-different-port-20210814095040-6746" has a running status.
	I0814 09:50:42.382207  242948 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa...
	I0814 09:50:42.445995  242948 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:50:42.839357  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:42.883219  242948 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:50:42.883247  242948 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210814095040-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:50:44.223377  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:46.723426  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:45.601178  242948 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210814095040-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.052608109s)
	I0814 09:50:45.601212  242948 kic.go:188] duration metric: took 4.052804 seconds to extract preloaded images to volume
	I0814 09:50:45.601281  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:45.639926  242948 machine.go:88] provisioning docker machine ...
	I0814 09:50:45.639958  242948 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210814095040-6746"
	I0814 09:50:45.640004  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:45.677111  242948 main.go:130] libmachine: Using SSH client type: native
	I0814 09:50:45.677287  242948 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I0814 09:50:45.677302  242948 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210814095040-6746 && echo "default-k8s-different-port-20210814095040-6746" | sudo tee /etc/hostname
	I0814 09:50:45.811627  242948 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210814095040-6746
	
	I0814 09:50:45.811696  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:45.851031  242948 main.go:130] libmachine: Using SSH client type: native
	I0814 09:50:45.851173  242948 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I0814 09:50:45.851198  242948 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210814095040-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210814095040-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210814095040-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:50:45.971942  242948 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:50:45.971970  242948 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:50:45.972021  242948 ubuntu.go:177] setting up certificates
	I0814 09:50:45.972032  242948 provision.go:83] configureAuth start
	I0814 09:50:45.972081  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.010084  242948 provision.go:138] copyHostCerts
	I0814 09:50:46.010154  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:50:46.010173  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:50:46.010236  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:50:46.010318  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:50:46.010330  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:50:46.010360  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:50:46.010420  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:50:46.010429  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:50:46.010454  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:50:46.010510  242948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210814095040-6746 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210814095040-6746]
	I0814 09:50:46.128382  242948 provision.go:172] copyRemoteCerts
	I0814 09:50:46.128444  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:50:46.128496  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.168879  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.259375  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:50:46.275048  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0814 09:50:46.289966  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:50:46.304803  242948 provision.go:86] duration metric: configureAuth took 332.753132ms
	I0814 09:50:46.304824  242948 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:50:46.304953  242948 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:50:46.304963  242948 machine.go:91] provisioned docker machine in 665.019762ms
	I0814 09:50:46.304997  242948 client.go:171] LocalClient.Create took 5.823722197s
	I0814 09:50:46.305019  242948 start.go:168] duration metric: libmachine.API.Create for "default-k8s-different-port-20210814095040-6746" took 5.823809022s
	I0814 09:50:46.305031  242948 start.go:267] post-start starting for "default-k8s-different-port-20210814095040-6746" (driver="docker")
	I0814 09:50:46.305037  242948 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:50:46.305081  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:50:46.305111  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.345433  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.439381  242948 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:50:46.441947  242948 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:50:46.441969  242948 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:50:46.441986  242948 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:50:46.441995  242948 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:50:46.442005  242948 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:50:46.442045  242948 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:50:46.442142  242948 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:50:46.442245  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:50:46.448155  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:50:46.463470  242948 start.go:270] post-start completed in 158.429661ms
	I0814 09:50:46.463755  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.503225  242948 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json ...
	I0814 09:50:46.503476  242948 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:50:46.503519  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.540563  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.625051  242948 start.go:129] duration metric: createHost completed in 6.147253606s
	I0814 09:50:46.625076  242948 start.go:80] releasing machines lock for "default-k8s-different-port-20210814095040-6746", held for 6.147423912s
	I0814 09:50:46.625156  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.664584  242948 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:50:46.664645  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.664652  242948 ssh_runner.go:149] Run: systemctl --version
	I0814 09:50:46.664697  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.707048  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.707255  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.821908  242948 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:50:46.831031  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:50:46.839119  242948 docker.go:153] disabling docker service ...
	I0814 09:50:46.839168  242948 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:50:46.853257  242948 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:50:46.861067  242948 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:50:46.923543  242948 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:50:46.978774  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:50:46.986852  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:50:46.997957  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
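The containerd config is shipped as a base64 blob and decoded on the node with `base64 -d`, as the command above shows. A minimal Go sketch of the same decode, assuming the blob has been saved to a hypothetical local file config.toml.b64; the decoded TOML begins with root = "/var/lib/containerd", matching the first bytes of the blob:

	package main

	import (
		"encoding/base64"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// config.toml.b64 is a hypothetical file holding the base64 string
		// copied from the log line above.
		blob, err := os.ReadFile("config.toml.b64")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		raw, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(blob)))
		if err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		fmt.Print(string(raw))
	}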
	I0814 09:50:47.009652  242948 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:50:47.015207  242948 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:50:47.015245  242948 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:50:47.021672  242948 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:50:47.027332  242948 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:50:47.082686  242948 ssh_runner.go:149] Run: sudo systemctl restart containerd
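
The failed sysctl probe above (status 255) is expected on a fresh node: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is exactly what the subsequent modprobe does before IPv4 forwarding is switched on and containerd is restarted. A sketch of that check-then-load pattern, assuming root on a Linux host; the helper is illustrative, not minikube's code.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The sysctl key only appears after br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, e := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); e != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", e, out)
		}
	}
	// Equivalent of the logged: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		log.Fatal(err)
	}
}
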
	I0814 09:50:47.143652  242948 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:50:47.143716  242948 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:50:47.146808  242948 start.go:413] Will wait 60s for crictl version
	I0814 09:50:47.146863  242948 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:50:47.169179  242948 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:50:47Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:50:49.222994  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:51.223160  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:53.223991  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:54.720501  219213 pod_ready.go:97] error getting pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-c7vfk" not found
	I0814 09:50:54.720529  219213 pod_ready.go:81] duration metric: took 21.007916602s waiting for pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace to be "Ready" ...
	E0814 09:50:54.720541  219213 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-c7vfk" not found
	I0814 09:50:54.720550  219213 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.724877  219213 pod_ready.go:92] pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.724893  219213 pod_ready.go:81] duration metric: took 4.331809ms waiting for pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.724903  219213 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.728530  219213 pod_ready.go:92] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.728548  219213 pod_ready.go:81] duration metric: took 3.638427ms waiting for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.728567  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.732170  219213 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.732186  219213 pod_ready.go:81] duration metric: took 3.612156ms waiting for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.732196  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.735668  219213 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.735682  219213 pod_ready.go:81] duration metric: took 3.480378ms waiting for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.735691  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xcshh" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.920884  219213 pod_ready.go:92] pod "kube-proxy-xcshh" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.920903  219213 pod_ready.go:81] duration metric: took 185.206559ms waiting for pod "kube-proxy-xcshh" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.920913  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:55.321495  219213 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:55.321518  219213 pod_ready.go:81] duration metric: took 400.598171ms waiting for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:55.321529  219213 pod_ready.go:38] duration metric: took 21.625428997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
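
The pod_ready loop above polls each system-critical pod until its PodReady condition reports True, and treats a pod that vanishes mid-wait (the replaced coredns-558bd4d5db-c7vfk replica) as skippable rather than fatal. A client-go sketch of that condition check; the kubeconfig source, polling interval, and pod name are illustrative, not minikube's exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition PodReady=True.
// A missing pod is treated as "done", mirroring the skip in the log.
func podReady(c *kubernetes.Clientset, ns, name string) wait.ConditionFunc {
	return func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod was replaced; skip it rather than fail
		}
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// 6m0s matches the per-pod budget in the log; the interval is a guess.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute,
		podReady(client, "kube-system", "etcd-embed-certs-20210814094325-6746"))
	fmt.Println("ready:", err == nil)
}
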
	I0814 09:50:55.321547  219213 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:50:55.321592  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:50:55.343154  219213 api_server.go:70] duration metric: took 21.759811987s to wait for apiserver process to appear ...
	I0814 09:50:55.343176  219213 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:50:55.343186  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:50:55.347349  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:50:55.348107  219213 api_server.go:139] control plane version: v1.21.3
	I0814 09:50:55.348127  219213 api_server.go:129] duration metric: took 4.944829ms to wait for apiserver health ...
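
The healthz gate is a plain HTTPS GET against the apiserver, which returns status 200 with the literal body "ok" once it is serving, as the log shows. A minimal probe follows; it skips TLS verification for brevity, whereas minikube trusts its generated CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: verification is skipped only to keep the sketch short;
	// in real use, load the cluster CA into the TLS config.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
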
	I0814 09:50:55.348136  219213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:50:55.523276  219213 system_pods.go:59] 9 kube-system pods found
	I0814 09:50:55.523298  219213 system_pods.go:61] "coredns-558bd4d5db-wjlqr" [052286f2-685f-4520-8f5c-13e35b07e27e] Running
	I0814 09:50:55.523303  219213 system_pods.go:61] "etcd-embed-certs-20210814094325-6746" [62f460fe-d11d-4e50-a549-f9a153888a5d] Running
	I0814 09:50:55.523306  219213 system_pods.go:61] "kindnet-kvv65" [c0bc8515-5565-4fb1-a82d-d01bc090d641] Running
	I0814 09:50:55.523311  219213 system_pods.go:61] "kube-apiserver-embed-certs-20210814094325-6746" [04913668-df62-4d8e-8166-fe1aaf7ba56b] Running
	I0814 09:50:55.523316  219213 system_pods.go:61] "kube-controller-manager-embed-certs-20210814094325-6746" [57b659e5-19e6-415a-995a-3e92b39b5a41] Running
	I0814 09:50:55.523319  219213 system_pods.go:61] "kube-proxy-xcshh" [cbf58cc2-48cb-4eca-8d30-904694fbb480] Running
	I0814 09:50:55.523323  219213 system_pods.go:61] "kube-scheduler-embed-certs-20210814094325-6746" [ccdd236c-694a-4805-a6cd-7fa58b99395e] Running
	I0814 09:50:55.523332  219213 system_pods.go:61] "metrics-server-7c784ccb57-5nrfw" [5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:50:55.523338  219213 system_pods.go:61] "storage-provisioner" [3f6d1385-66f0-49e9-a561-d557c138f7b6] Running
	I0814 09:50:55.523344  219213 system_pods.go:74] duration metric: took 175.203116ms to wait for pod list to return data ...
	I0814 09:50:55.523353  219213 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:50:55.721303  219213 default_sa.go:45] found service account: "default"
	I0814 09:50:55.721329  219213 default_sa.go:55] duration metric: took 197.969622ms for default service account to be created ...
	I0814 09:50:55.721339  219213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:50:55.923597  219213 system_pods.go:86] 9 kube-system pods found
	I0814 09:50:55.923624  219213 system_pods.go:89] "coredns-558bd4d5db-wjlqr" [052286f2-685f-4520-8f5c-13e35b07e27e] Running
	I0814 09:50:55.923629  219213 system_pods.go:89] "etcd-embed-certs-20210814094325-6746" [62f460fe-d11d-4e50-a549-f9a153888a5d] Running
	I0814 09:50:55.923634  219213 system_pods.go:89] "kindnet-kvv65" [c0bc8515-5565-4fb1-a82d-d01bc090d641] Running
	I0814 09:50:55.923639  219213 system_pods.go:89] "kube-apiserver-embed-certs-20210814094325-6746" [04913668-df62-4d8e-8166-fe1aaf7ba56b] Running
	I0814 09:50:55.923644  219213 system_pods.go:89] "kube-controller-manager-embed-certs-20210814094325-6746" [57b659e5-19e6-415a-995a-3e92b39b5a41] Running
	I0814 09:50:55.923651  219213 system_pods.go:89] "kube-proxy-xcshh" [cbf58cc2-48cb-4eca-8d30-904694fbb480] Running
	I0814 09:50:55.923655  219213 system_pods.go:89] "kube-scheduler-embed-certs-20210814094325-6746" [ccdd236c-694a-4805-a6cd-7fa58b99395e] Running
	I0814 09:50:55.923663  219213 system_pods.go:89] "metrics-server-7c784ccb57-5nrfw" [5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:50:55.923670  219213 system_pods.go:89] "storage-provisioner" [3f6d1385-66f0-49e9-a561-d557c138f7b6] Running
	I0814 09:50:55.923677  219213 system_pods.go:126] duration metric: took 202.332969ms to wait for k8s-apps to be running ...
	I0814 09:50:55.923687  219213 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:50:55.923726  219213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:50:55.932502  219213 system_svc.go:56] duration metric: took 8.810307ms WaitForService to wait for kubelet.
	I0814 09:50:55.932525  219213 kubeadm.go:547] duration metric: took 22.349186518s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:50:55.932552  219213 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:50:56.122070  219213 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:50:56.122094  219213 node_conditions.go:123] node cpu capacity is 8
	I0814 09:50:56.122107  219213 node_conditions.go:105] duration metric: took 189.549407ms to run NodePressure ...
	I0814 09:50:56.122116  219213 start.go:231] waiting for startup goroutines ...
	I0814 09:50:56.165685  219213 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:50:56.167880  219213 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210814094325-6746" cluster and "default" namespace by default
	I0814 09:50:58.219059  242948 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:50:58.309835  242948 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:50:58.309907  242948 ssh_runner.go:149] Run: containerd --version
	I0814 09:50:58.330960  242948 ssh_runner.go:149] Run: containerd --version
	I0814 09:50:58.353208  242948 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:50:58.353286  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:50:58.391168  242948 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0814 09:50:58.394266  242948 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:50:58.403013  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:58.403075  242948 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:50:58.424249  242948 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:50:58.424264  242948 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:50:58.424296  242948 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:50:58.444133  242948 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:50:58.444150  242948 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:50:58.444182  242948 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:50:58.464037  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:50:58.464053  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:50:58.464062  242948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:50:58.464075  242948 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210814095040-6746 NodeName:default-k8s-different-port-20210814095040-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2
CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:50:58.464192  242948 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210814095040-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:50:58.464276  242948 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210814095040-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0814 09:50:58.464314  242948 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:50:58.470390  242948 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:50:58.470444  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:50:58.476398  242948 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0814 09:50:58.487492  242948 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:50:58.498448  242948 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0814 09:50:58.509594  242948 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:50:58.512068  242948 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:50:58.520004  242948 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746 for IP: 192.168.49.2
	I0814 09:50:58.520042  242948 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:50:58.520057  242948 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:50:58.520106  242948 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key
	I0814 09:50:58.520115  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt with IP's: []
	I0814 09:50:58.605811  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt ...
	I0814 09:50:58.605832  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt: {Name:mkacaae754c3f3d8a12af248e60d4f2dfeb1fcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.605983  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key ...
	I0814 09:50:58.605995  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key: {Name:mkc908febb624f8dcae4593839bc3cdd86a1ad31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.606077  242948 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2
	I0814 09:50:58.606087  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:50:58.792141  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 ...
	I0814 09:50:58.792164  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2: {Name:mk1ce321d1a9a1e324dde7b9a016555ddd6031d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.792303  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2 ...
	I0814 09:50:58.792317  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2: {Name:mk8ca5617228b674c440c829a6a0ed6ba7adf225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.792390  242948 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt
	I0814 09:50:58.792489  242948 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key
	I0814 09:50:58.792543  242948 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key
	I0814 09:50:58.792551  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt with IP's: []
	I0814 09:50:58.996340  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt ...
	I0814 09:50:58.996371  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt: {Name:mk55a688d6c41aa245f7d2d45cd1b092fbfe314a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.996534  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key ...
	I0814 09:50:58.996546  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key: {Name:mk240bac2833d6f959e53ffe7865c747fc43bc7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.996701  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:50:58.996736  242948 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:50:58.996746  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:50:58.996770  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:50:58.996833  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:50:58.996856  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:50:58.996899  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:50:58.997780  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:50:59.014380  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:50:59.029550  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:50:59.045055  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:50:59.060084  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:50:59.074803  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:50:59.089518  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:50:59.104113  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:50:59.119093  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:50:59.134192  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:50:59.150511  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:50:59.165586  242948 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:50:59.176450  242948 ssh_runner.go:149] Run: openssl version
	I0814 09:50:59.180631  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:50:59.187025  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.189664  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.189709  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.195171  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:50:59.201806  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:50:59.208282  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.211124  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.211160  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.215395  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:50:59.221855  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:50:59.228613  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.231438  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.231472  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.236446  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
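
Each openssl/ln pair above installs a certificate under OpenSSL's hashed-directory convention: verification looks CAs up in /etc/ssl/certs through a <subject-hash>.0 symlink (b5213941.0 is the hash for minikubeCA.pem here). A sketch of one such installation step, assuming the openssl CLI is available; the wrapper is illustrative, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject-name hash that
	// names the symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Same effect as the logged: sudo ln -fs <cert> <link>
	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}
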
	I0814 09:50:59.243050  242948 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:50:59.243131  242948 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:50:59.243196  242948 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:50:59.266979  242948 cri.go:76] found id: ""
	I0814 09:50:59.267040  242948 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:50:59.273892  242948 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:50:59.280356  242948 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:50:59.280407  242948 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:50:59.286903  242948 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:50:59.286944  242948 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:50:59.556038  242948 out.go:204]   - Generating certificates and keys ...
	I0814 09:51:02.082739  242948 out.go:204]   - Booting up control plane ...
	I0814 09:51:15.625043  242948 out.go:204]   - Configuring RBAC rules ...
	I0814 09:51:16.037638  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:51:16.037662  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:51:16.039435  242948 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:51:16.039505  242948 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:51:16.042860  242948 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:51:16.042877  242948 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:51:16.054746  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:51:16.398428  242948 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:51:16.398487  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:16.398496  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=default-k8s-different-port-20210814095040-6746 minikube.k8s.io/updated_at=2021_08_14T09_51_16_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:16.413932  242948 ops.go:34] apiserver oom_adj: -16
	I0814 09:51:16.505770  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:17.072881  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:17.572393  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:18.072263  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:18.573168  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:19.072912  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:19.572947  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:20.072868  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:23.342009  242948 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.149988297s)
	I0814 09:51:23.572292  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:24.073122  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:24.573269  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
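
The repeated `kubectl get sa default` runs above are the post-kubeadm-init gate: bring-up is not considered done until the default service account exists, and the timestamps show a roughly 500ms polling cadence. A client-go equivalent of that poll; the interval and timeout are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 500ms, as the timestamps above suggest, until the
	// "default" ServiceAccount exists in the "default" namespace.
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not created yet; keep polling
		}
		return err == nil, err
	})
	fmt.Println("default service account present:", err == nil)
}
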
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	1862174cc5e0e       523cad1a4df73       26 seconds ago       Exited              dashboard-metrics-scraper   2                   913463a8ccf64
	efd9db92085eb       9a07b5b4bfac0       47 seconds ago       Running             kubernetes-dashboard        0                   7e54e81316330
	8ef370c3ebb55       6e38f40d628db       48 seconds ago       Running             storage-provisioner         0                   3cad7404cb4f2
	c2051bc4ae872       296a6d5035e2d       49 seconds ago       Running             coredns                     0                   021199ccb79c5
	c145e33f75c99       6de166512aa22       50 seconds ago       Running             kindnet-cni                 0                   c9bd7ea8434fc
	f1468c559df50       adb2816ea823a       51 seconds ago       Running             kube-proxy                  0                   5d392d166e020
	df5983ab2d3f9       6be0dc1302e30       About a minute ago   Running             kube-scheduler              0                   09addeb11662d
	36ad15e77f314       bc2bb319a7038       About a minute ago   Running             kube-controller-manager     0                   1c88459572814
	a2029d64830b8       3d174f00aa39e       About a minute ago   Running             kube-apiserver              0                   07ce24f2cc07a
	90b59592c23d4       0369cf4303ffd       About a minute ago   Running             etcd                        0                   247a0e8677d04
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:45:14 UTC, end at Sat 2021-08-14 09:51:25 UTC. --
	Aug 14 09:50:41 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:41.960862226Z" level=info msg="Container to stop \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.062121875Z" level=info msg="TaskExit event &TaskExit{ContainerID:c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65,ID:c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65,Pid:5654,ExitStatus:137,ExitedAt:2021-08-14 09:50:42.061866149 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.101739951Z" level=info msg="shim disconnected" id=c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.101820216Z" level=error msg="copy shim log" error="read /proc/self/fd/83: file already closed"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.188891167Z" level=info msg="TearDown network for sandbox \"c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65\" successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.188934907Z" level=info msg="StopPodSandbox for \"c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65\" returns successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.515303799Z" level=info msg="RemoveContainer for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.520544783Z" level=info msg="RemoveContainer for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" returns successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.521181572Z" level=error msg="ContainerStatus for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\": not found"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.522692420Z" level=info msg="RemoveContainer for \"6bf02b7a63d0e39db037c2f3b89fba9d4dfee9f801d768b39968f72bd9d2b45a\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.527393348Z" level=info msg="RemoveContainer for \"6bf02b7a63d0e39db037c2f3b89fba9d4dfee9f801d768b39968f72bd9d2b45a\" returns successfully"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.227521500Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.294737474Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.295926302Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.227729472Z" level=info msg="CreateContainer within sandbox \"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,}"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.262462808Z" level=info msg="CreateContainer within sandbox \"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,} returns container id \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.262950261Z" level=info msg="StartContainer for \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.434757635Z" level=info msg="StartContainer for \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\" returns successfully"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469151595Z" level=info msg="Finish piping stderr of container \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469184697Z" level=info msg="Finish piping stdout of container \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469980873Z" level=info msg="TaskExit event &TaskExit{ContainerID:1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82,ID:1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82,Pid:6740,ExitStatus:1,ExitedAt:2021-08-14 09:50:59.469759043 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.505397460Z" level=info msg="shim disconnected" id=1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.505481929Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.547120670Z" level=info msg="RemoveContainer for \"8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.551904618Z" level=info msg="RemoveContainer for \"8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0\" returns successfully"
	
	* 
	* ==> coredns [c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.032259] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vetha730867e
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 b0 2c 69 36 56 08 06        ........,i6V..
	[  +0.715640] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth2cf9a783
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff c6 ed 1c 18 61 89 08 06        ..........a...
	[  +0.453803] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethfd647b8c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e c9 5e 1b 0b 08 08 06        ........^.....
	[  +0.238950] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth66c80aa5
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 9d a2 94 49 09 08 06        ......B...I...
	[Aug14 09:50] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev veth219d8885
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 72 ae 3d be 32 47 08 06        ......r.=.2G..
	[  +0.407019] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethda4d8623
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff f2 c4 73 9e f2 b3 08 06        ........s.....
	[  +1.892879] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethbc400799
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 5a 18 0b d4 f0 08 06        .......Z......
	[  +0.451541] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethf3fb868f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 46 cd a5 37 a9 08 06        .......F..7...
	[  +0.899820] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth117eea46
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0e bd 0c 0c 46 f1 08 06        ..........F...
	[  +4.461460] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:51] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth89e105be
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 7f 5e f2 37 71 08 06        ........^.7q..
	[  +3.051603] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth1849f27d
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff aa 3e 04 01 85 4a 08 06        .......>...J..
	[  +8.683493] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethd1bc3625
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e 2d 8f 0c d8 ae 08 06        ......^-......
	
	* 
	* ==> etcd [90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55] <==
	* 2021-08-14 09:50:11.649106 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-14 09:50:11.649343 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)
	2021-08-14 09:50:11.649615 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-14 09:50:11.651328 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-14 09:50:11.651385 I | embed: listening for peers on 192.168.58.2:2380
	2021-08-14 09:50:11.651465 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 is starting a new election at term 1
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 became candidate at term 2
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 became leader at term 2
	raft2021/08/14 09:50:11 INFO: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-14 09:50:11.942588 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-14 09:50:11.943313 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-14 09:50:11.943369 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-14 09:50:11.943426 I | etcdserver: published {Name:embed-certs-20210814094325-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:50:11.943442 I | embed: ready to serve client requests
	2021-08-14 09:50:11.943553 I | embed: ready to serve client requests
	2021-08-14 09:50:11.945597 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-14 09:50:11.950474 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:50:29.041684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:34.901449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:44.865083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:54.864171 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:51:04.864015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  09:51:47 up  1:34,  0 users,  load average: 1.02, 1.38, 1.67
	Linux embed-certs-20210814094325-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83] <==
	* W0814 09:51:43.530084       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	I0814 09:51:45.934751       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0814 09:51:46.003189       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0814 09:51:46.010073       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0814 09:51:46.039361       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0814 09:51:46.048550       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0814 09:51:46.715770       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0814 09:51:46.715784       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	E0814 09:51:47.216858       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	E0814 09:51:47.216858       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0814 09:51:47.217139       1 trace.go:205] Trace[1295897914]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:51:17.215) (total time: 30001ms):
	Trace[1295897914]: [30.001263177s] [30.001263177s] END
	I0814 09:51:47.218340       1 trace.go:205] Trace[532320195]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:51:16.023) (total time: 31195ms):
	Trace[532320195]: [31.195239627s] [31.195239627s] END
	I0814 09:51:47.218879       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0814 09:51:47.467128       1 trace.go:205] Trace[1565653822]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:51:25.687) (total time: 21779ms):
	Trace[1565653822]: [21.779128649s] [21.779128649s] END
	E0814 09:51:47.467186       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0814 09:51:47.467139       1 trace.go:205] Trace[764692540]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:51:16.122) (total time: 31344ms):
	Trace[764692540]: [31.344701697s] [31.344701697s] END
	E0814 09:51:47.467301       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0814 09:51:47.467491       1 trace.go:205] Trace[1811874254]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:51:25.687) (total time: 21779ms):
	Trace[1811874254]: [21.779539579s] [21.779539579s] END
	I0814 09:51:47.468774       1 trace.go:205] Trace[20416942]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.58.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:51:16.122) (total time: 31346ms):
	Trace[20416942]: [31.346355784s] [31.346355784s] END
	
	* 
	* ==> kube-controller-manager [36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee] <==
	* I0814 09:50:33.088763       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-c7vfk"
	I0814 09:50:35.313219       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0814 09:50:35.331213       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:50:35.418588       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:50:35.426804       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-5nrfw"
	I0814 09:50:36.029281       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0814 09:50:36.117463       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:50:36.121445       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0814 09:50:36.122885       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.127069       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.129271       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.129538       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.203476       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:50:36.205802       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.206089       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.209774       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.210060       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.211663       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.211731       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.213131       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.213178       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:50:36.224567       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-s5twx"
	I0814 09:50:36.308419       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-gmk5j"
	E0814 09:51:02.307907       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:51:02.731841       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9] <==
	* I0814 09:50:34.212539       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:50:34.212604       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:50:34.212632       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:50:34.412568       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:50:34.412610       1 server_others.go:212] Using iptables Proxier.
	I0814 09:50:34.412625       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:50:34.412658       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:50:34.413285       1 server.go:643] Version: v1.21.3
	I0814 09:50:34.420355       1 config.go:315] Starting service config controller
	I0814 09:50:34.420380       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:50:34.420424       1 config.go:224] Starting endpoint slice config controller
	I0814 09:50:34.420433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:50:34.425120       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:50:34.431401       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:50:34.520879       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:50:34.520934       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799] <==
	* W0814 09:50:15.808240       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 09:50:15.808258       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 09:50:15.808266       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 09:50:15.824968       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:50:15.825017       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0814 09:50:15.825026       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:50:15.826778       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 09:50:15.905345       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:50:15.909182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909298       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:50:15.909404       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909484       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:50:15.909560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:50:15.909645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:50:15.909712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:50:15.909763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:50:15.909808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909855       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:50:15.909905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:50:15.909963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:50:15.910012       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:16.719502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:16.731061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:50:16.736953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0814 09:50:17.527017       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:45:14 UTC, end at Sat 2021-08-14 09:51:47 UTC. --
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.241373    4845 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e384-ec50-41ce-9c60-fb8ed66f2b71-config-volume" (OuterVolumeSpecName: "config-volume") pod "4259e384-ec50-41ce-9c60-fb8ed66f2b71" (UID: "4259e384-ec50-41ce-9c60-fb8ed66f2b71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.265204    4845 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4259e384-ec50-41ce-9c60-fb8ed66f2b71-kube-api-access-nxgqp" (OuterVolumeSpecName: "kube-api-access-nxgqp") pod "4259e384-ec50-41ce-9c60-fb8ed66f2b71" (UID: "4259e384-ec50-41ce-9c60-fb8ed66f2b71"). InnerVolumeSpecName "kube-api-access-nxgqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.342257    4845 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4259e384-ec50-41ce-9c60-fb8ed66f2b71-config-volume\") on node \"embed-certs-20210814094325-6746\" DevicePath \"\""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.342294    4845 reconciler.go:319] "Volume detached for volume \"kube-api-access-nxgqp\" (UniqueName: \"kubernetes.io/projected/4259e384-ec50-41ce-9c60-fb8ed66f2b71-kube-api-access-nxgqp\") on node \"embed-certs-20210814094325-6746\" DevicePath \"\""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.519347    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:43.519720    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: W0814 09:50:43.865362    4845 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod054dc08e-4bc7-4ae9-adf6-55f654ff6b86/8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0 WatchSource:0}: task 8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0 not found: not found
	Aug 14 09:50:46 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:46.321535    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:46 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:46.321811    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296140    4845 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296185    4845 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296308    4845 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vzhhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-5nrfw_kube-system(5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296350    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-5nrfw" podUID=5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.225580    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.546177    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.546474    4845 scope.go:111] "RemoveContainer" containerID="1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:59.546875    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:51:00 embed-certs-20210814094325-6746 kubelet[4845]: W0814 09:51:00.785937    4845 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod054dc08e-4bc7-4ae9-adf6-55f654ff6b86/1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82 WatchSource:0}: task 1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82 not found: not found
	Aug 14 09:51:02 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:51:02.226698    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-5nrfw" podUID=5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7
	Aug 14 09:51:06 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:51:06.321486    4845 scope.go:111] "RemoveContainer" containerID="1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	Aug 14 09:51:06 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:51:06.321858    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:51:09 embed-certs-20210814094325-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:51:09 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:51:09.049856    4845 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 14 09:51:09 embed-certs-20210814094325-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:51:09 embed-certs-20210814094325-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af] <==
	* 2021/08/14 09:50:38 Using namespace: kubernetes-dashboard
	2021/08/14 09:50:38 Using in-cluster config to connect to apiserver
	2021/08/14 09:50:38 Using secret token for csrf signing
	2021/08/14 09:50:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:50:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:50:38 Successful initial request to the apiserver, version: v1.21.3
	2021/08/14 09:50:38 Generating JWE encryption key
	2021/08/14 09:50:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:50:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:50:39 Initializing JWE encryption key from synchronized object
	2021/08/14 09:50:39 Creating in-cluster Sidecar client
	2021/08/14 09:50:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:50:39 Serving insecurely on HTTP port: 9090
	2021/08/14 09:51:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:50:38 Starting overwatch
	
	* 
	* ==> storage-provisioner [8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b] <==
	* I0814 09:50:36.806345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:50:36.814591       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:50:36.814644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:50:36.820487       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:50:36.820644       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e!
	I0814 09:50:36.821582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a0b34a7-1595-4e6c-a60d-8ec24e8b8d67", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e became leader
	I0814 09:50:36.921284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:51:47.471569  247565 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210814094325-6746
helpers_test.go:236: (dbg) docker inspect embed-certs-20210814094325-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576",
	        "Created": "2021-08-14T09:43:27.289846985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219779,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:45:14.416227785Z",
	            "FinishedAt": "2021-08-14T09:45:12.088163109Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hostname",
	        "HostsPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/hosts",
	        "LogPath": "/var/lib/docker/containers/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576/d2385af2cb057895324da8a96523cf61fb167cbbb57c0303799a22f65d14b576-json.log",
	        "Name": "/embed-certs-20210814094325-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210814094325-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210814094325-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a827f4ed82962ef26c4cedb302daa8f26074778b189bf117d8613e7d709be415/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210814094325-6746",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210814094325-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210814094325-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210814094325-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a6933adbbd3ce722a675e5adeafc189199fe1d8fada7eebf787d37f915e239a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32948"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1a6933adbbd3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210814094325-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d2385af2cb05"
	                    ],
	                    "NetworkID": "dbc6f9acad495850f4b0b885d051bfbd2cce05a9032571d93062419b0fbb36d2",
	                    "EndpointID": "8a463f9704f74ef43e52668f81108caad669353d43761c271d2d4d574c959212",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746: exit status 2 (15.744608748s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:52:03.501255  249597 status.go:422] Error apiserver status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210814094325-6746 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210814094325-6746 logs -n 25: exit status 110 (1m0.837144005s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:08 UTC | Sat, 14 Aug 2021 09:42:40 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:48 UTC | Sat, 14 Aug 2021 09:42:49 UTC |
	|         | no-preload-20210814094108-6746                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210814093902-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:41:38 UTC | Sat, 14 Aug 2021 09:43:05 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false                              |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:42:49 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:43:10 UTC |
	|         | no-preload-20210814094108-6746                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210814093902-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:16 UTC | Sat, 14 Aug 2021 09:43:16 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:18 UTC | Sat, 14 Aug 2021 09:43:19 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20210814093902-6746               | old-k8s-version-20210814093902-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:20 UTC | Sat, 14 Aug 2021 09:43:21 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:21 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20210814093902-6746            | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:43:25 UTC |
	|         | old-k8s-version-20210814093902-6746               |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                   | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:50 UTC | Sat, 14 Aug 2021 09:44:51 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:51 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                 | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:48:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:48:45 UTC | Sat, 14 Aug 2021 09:48:45 UTC |
	|         | no-preload-20210814094108-6746                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:34 UTC | Sat, 14 Aug 2021 09:50:38 UTC |
	|         | no-preload-20210814094108-6746                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:38 UTC | Sat, 14 Aug 2021 09:50:39 UTC |
	|         | no-preload-20210814094108-6746                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210814095039-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:39 UTC | Sat, 14 Aug 2021 09:50:40 UTC |
	|         | disable-driver-mounts-20210814095039-6746         |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:50:56 UTC |
	|         | embed-certs-20210814094325-6746                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=docker                                   |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                   | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:06 UTC | Sat, 14 Aug 2021 09:51:07 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:08 UTC | Sat, 14 Aug 2021 09:51:08 UTC |
	|         | embed-certs-20210814094325-6746                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:40 UTC | Sat, 14 Aug 2021 09:51:36 UTC |
	|         | default-k8s-different-port-20210814095040-6746    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:51:45 UTC |
	|         | default-k8s-different-port-20210814095040-6746    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:50:40
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:50:40.078160  242948 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:50:40.078244  242948 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:50:40.078254  242948 out.go:311] Setting ErrFile to fd 2...
	I0814 09:50:40.078258  242948 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:50:40.078366  242948 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:50:40.078628  242948 out.go:305] Setting JSON to false
	I0814 09:50:40.119352  242948 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5602,"bootTime":1628929038,"procs":276,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:50:40.119448  242948 start.go:121] virtualization: kvm guest
	I0814 09:50:40.122500  242948 out.go:177] * [default-k8s-different-port-20210814095040-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:50:40.124210  242948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:50:40.122699  242948 notify.go:169] Checking for updates...
	I0814 09:50:40.125676  242948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:50:40.127206  242948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:50:40.128678  242948 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:50:40.129277  242948 config.go:177] Loaded profile config "embed-certs-20210814094325-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:50:40.129398  242948 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:50:40.129490  242948 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:50:40.129531  242948 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:50:40.189038  242948 docker.go:132] docker version: linux-19.03.15
	I0814 09:50:40.189164  242948 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:50:40.289687  242948 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:50:40.235277528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:50:40.289786  242948 docker.go:244] overlay module found
	I0814 09:50:40.291517  242948 out.go:177] * Using the docker driver based on user configuration
	I0814 09:50:40.291541  242948 start.go:278] selected driver: docker
	I0814 09:50:40.291546  242948 start.go:751] validating driver "docker" against <nil>
	I0814 09:50:40.291562  242948 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:50:40.291608  242948 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:50:40.291627  242948 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:50:40.292971  242948 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:50:40.293780  242948 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:50:40.384600  242948 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:50:40.338311012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:50:40.384710  242948 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:50:40.384920  242948 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:50:40.384946  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:50:40.384954  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:50:40.384964  242948 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:50:40.384974  242948 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:50:40.384981  242948 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:50:40.384991  242948 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:50:40.387054  242948 out.go:177] * Starting control plane node default-k8s-different-port-20210814095040-6746 in cluster default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.387087  242948 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:50:40.388430  242948 out.go:177] * Pulling base image ...
	I0814 09:50:40.388458  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:40.388489  242948 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:50:40.388500  242948 cache.go:56] Caching tarball of preloaded images
	I0814 09:50:40.388547  242948 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:50:40.388667  242948 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:50:40.388684  242948 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:50:40.388818  242948 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json ...
	I0814 09:50:40.388847  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json: {Name:mk37096ce7d1c408ab2119b9d1016f0ec54511d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:40.477442  242948 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:50:40.477474  242948 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:50:40.477489  242948 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:50:40.477530  242948 start.go:313] acquiring machines lock for default-k8s-different-port-20210814095040-6746: {Name:mke7f558db837977766a2f1aff9770a5c1ff83a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:50:40.477640  242948 start.go:317] acquired machines lock for "default-k8s-different-port-20210814095040-6746" in 92.564µs
	I0814 09:50:40.477663  242948 start.go:89] Provisioning new machine with config: &{Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:50:40.477786  242948 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:50:37.722771  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:39.723935  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:41.724562  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:40.480990  242948 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0814 09:50:40.481210  242948 start.go:160] libmachine.API.Create for "default-k8s-different-port-20210814095040-6746" (driver="docker")
	I0814 09:50:40.481241  242948 client.go:168] LocalClient.Create starting
	I0814 09:50:40.481338  242948 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:50:40.481371  242948 main.go:130] libmachine: Decoding PEM data...
	I0814 09:50:40.481389  242948 main.go:130] libmachine: Parsing certificate...
	I0814 09:50:40.481488  242948 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:50:40.481510  242948 main.go:130] libmachine: Decoding PEM data...
	I0814 09:50:40.481528  242948 main.go:130] libmachine: Parsing certificate...
	I0814 09:50:40.481849  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:50:40.525737  242948 cli_runner.go:162] docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:50:40.525811  242948 network_create.go:255] running [docker network inspect default-k8s-different-port-20210814095040-6746] to gather additional debugging logs...
	I0814 09:50:40.525834  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746
	W0814 09:50:40.571336  242948 cli_runner.go:162] docker network inspect default-k8s-different-port-20210814095040-6746 returned with exit code 1
	I0814 09:50:40.571370  242948 network_create.go:258] error running [docker network inspect default-k8s-different-port-20210814095040-6746]: docker network inspect default-k8s-different-port-20210814095040-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.571397  242948 network_create.go:260] output of [docker network inspect default-k8s-different-port-20210814095040-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20210814095040-6746
	
	** /stderr **
	I0814 09:50:40.571446  242948 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:50:40.614259  242948 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000114740] misses:0}
	I0814 09:50:40.614311  242948 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:50:40.614327  242948 network_create.go:106] attempt to create docker network default-k8s-different-port-20210814095040-6746 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0814 09:50:40.614367  242948 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.694116  242948 network_create.go:90] docker network default-k8s-different-port-20210814095040-6746 192.168.49.0/24 created
	I0814 09:50:40.694168  242948 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20210814095040-6746" container
	I0814 09:50:40.694232  242948 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:50:40.739929  242948 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20210814095040-6746 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:50:40.779996  242948 oci.go:102] Successfully created a docker volume default-k8s-different-port-20210814095040-6746
	I0814 09:50:40.780078  242948 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20210814095040-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --entrypoint /usr/bin/test -v default-k8s-different-port-20210814095040-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:50:41.548292  242948 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20210814095040-6746
	W0814 09:50:41.548348  242948 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:50:41.548361  242948 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:50:41.548375  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:41.548406  242948 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:50:41.548418  242948 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:50:41.548477  242948 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210814095040-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:50:41.636582  242948 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20210814095040-6746 --name default-k8s-different-port-20210814095040-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20210814095040-6746 --network default-k8s-different-port-20210814095040-6746 --ip 192.168.49.2 --volume default-k8s-different-port-20210814095040-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:50:42.144372  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Running}}
	I0814 09:50:42.192564  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:42.243507  242948 cli_runner.go:115] Run: docker exec default-k8s-different-port-20210814095040-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:50:42.382169  242948 oci.go:278] the created container "default-k8s-different-port-20210814095040-6746" has a running status.
	I0814 09:50:42.382207  242948 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa...
	I0814 09:50:42.445995  242948 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:50:42.839357  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:42.883219  242948 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:50:42.883247  242948 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210814095040-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:50:44.223377  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:46.723426  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:45.601178  242948 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210814095040-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.052608109s)
	I0814 09:50:45.601212  242948 kic.go:188] duration metric: took 4.052804 seconds to extract preloaded images to volume
	I0814 09:50:45.601281  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:50:45.639926  242948 machine.go:88] provisioning docker machine ...
	I0814 09:50:45.639958  242948 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210814095040-6746"
	I0814 09:50:45.640004  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:45.677111  242948 main.go:130] libmachine: Using SSH client type: native
	I0814 09:50:45.677287  242948 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I0814 09:50:45.677302  242948 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210814095040-6746 && echo "default-k8s-different-port-20210814095040-6746" | sudo tee /etc/hostname
	I0814 09:50:45.811627  242948 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210814095040-6746
	
	I0814 09:50:45.811696  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:45.851031  242948 main.go:130] libmachine: Using SSH client type: native
	I0814 09:50:45.851173  242948 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32953 <nil> <nil>}
	I0814 09:50:45.851198  242948 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210814095040-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210814095040-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210814095040-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:50:45.971942  242948 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:50:45.971970  242948 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:50:45.972021  242948 ubuntu.go:177] setting up certificates
	I0814 09:50:45.972032  242948 provision.go:83] configureAuth start
	I0814 09:50:45.972081  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.010084  242948 provision.go:138] copyHostCerts
	I0814 09:50:46.010154  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:50:46.010173  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:50:46.010236  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:50:46.010318  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:50:46.010330  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:50:46.010360  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:50:46.010420  242948 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:50:46.010429  242948 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:50:46.010454  242948 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:50:46.010510  242948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210814095040-6746 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210814095040-6746]
	I0814 09:50:46.128382  242948 provision.go:172] copyRemoteCerts
	I0814 09:50:46.128444  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:50:46.128496  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.168879  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.259375  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:50:46.275048  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0814 09:50:46.289966  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:50:46.304803  242948 provision.go:86] duration metric: configureAuth took 332.753132ms
	I0814 09:50:46.304824  242948 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:50:46.304953  242948 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:50:46.304963  242948 machine.go:91] provisioned docker machine in 665.019762ms
	I0814 09:50:46.304997  242948 client.go:171] LocalClient.Create took 5.823722197s
	I0814 09:50:46.305019  242948 start.go:168] duration metric: libmachine.API.Create for "default-k8s-different-port-20210814095040-6746" took 5.823809022s
	I0814 09:50:46.305031  242948 start.go:267] post-start starting for "default-k8s-different-port-20210814095040-6746" (driver="docker")
	I0814 09:50:46.305037  242948 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:50:46.305081  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:50:46.305111  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.345433  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.439381  242948 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:50:46.441947  242948 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:50:46.441969  242948 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:50:46.441986  242948 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:50:46.441995  242948 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:50:46.442005  242948 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:50:46.442045  242948 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:50:46.442142  242948 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:50:46.442245  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:50:46.448155  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:50:46.463470  242948 start.go:270] post-start completed in 158.429661ms
	I0814 09:50:46.463755  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.503225  242948 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/config.json ...
	I0814 09:50:46.503476  242948 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:50:46.503519  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.540563  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.625051  242948 start.go:129] duration metric: createHost completed in 6.147253606s
	I0814 09:50:46.625076  242948 start.go:80] releasing machines lock for "default-k8s-different-port-20210814095040-6746", held for 6.147423912s
	I0814 09:50:46.625156  242948 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.664584  242948 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:50:46.664645  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.664652  242948 ssh_runner.go:149] Run: systemctl --version
	I0814 09:50:46.664697  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:50:46.707048  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.707255  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:50:46.821908  242948 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:50:46.831031  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:50:46.839119  242948 docker.go:153] disabling docker service ...
	I0814 09:50:46.839168  242948 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:50:46.853257  242948 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:50:46.861067  242948 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:50:46.923543  242948 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:50:46.978774  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:50:46.986852  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:50:46.997957  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
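The base64 blob above is the complete containerd config.toml, encoded so it survives shell quoting on its way to /etc/containerd/config.toml. To inspect it, decode it the same way the command does (illustrative; the full blob is elided here):

	# decode without writing anything; the first lines of the resulting TOML follow
	$ echo "cm9vdCA9..." | base64 -d | head -n 3
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
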
	I0814 09:50:47.009652  242948 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:50:47.015207  242948 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:50:47.015245  242948 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:50:47.021672  242948 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
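The earlier sysctl failure is expected on a fresh kic node: the /proc/sys/net/bridge tree only exists once the br_netfilter module is loaded, which is exactly what the subsequent modprobe provides; IPv4 forwarding is then switched on directly through /proc. Condensed into a standalone sketch (not a verbatim replay of the log), the bridging prerequisites amount to:

	$ sudo modprobe br_netfilter
	$ echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
	$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
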
	I0814 09:50:47.027332  242948 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:50:47.082686  242948 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:50:47.143652  242948 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:50:47.143716  242948 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:50:47.146808  242948 start.go:413] Will wait 60s for crictl version
	I0814 09:50:47.146863  242948 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:50:47.169179  242948 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:50:47Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:50:49.222994  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:51.223160  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:53.223991  219213 pod_ready.go:102] pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace has status "Ready":"False"
	I0814 09:50:54.720501  219213 pod_ready.go:97] error getting pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-c7vfk" not found
	I0814 09:50:54.720529  219213 pod_ready.go:81] duration metric: took 21.007916602s waiting for pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace to be "Ready" ...
	E0814 09:50:54.720541  219213 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-c7vfk" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-c7vfk" not found
	I0814 09:50:54.720550  219213 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.724877  219213 pod_ready.go:92] pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.724893  219213 pod_ready.go:81] duration metric: took 4.331809ms waiting for pod "coredns-558bd4d5db-wjlqr" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.724903  219213 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.728530  219213 pod_ready.go:92] pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.728548  219213 pod_ready.go:81] duration metric: took 3.638427ms waiting for pod "etcd-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.728567  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.732170  219213 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.732186  219213 pod_ready.go:81] duration metric: took 3.612156ms waiting for pod "kube-apiserver-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.732196  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.735668  219213 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.735682  219213 pod_ready.go:81] duration metric: took 3.480378ms waiting for pod "kube-controller-manager-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.735691  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xcshh" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.920884  219213 pod_ready.go:92] pod "kube-proxy-xcshh" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:54.920903  219213 pod_ready.go:81] duration metric: took 185.206559ms waiting for pod "kube-proxy-xcshh" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:54.920913  219213 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:55.321495  219213 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:50:55.321518  219213 pod_ready.go:81] duration metric: took 400.598171ms waiting for pod "kube-scheduler-embed-certs-20210814094325-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:50:55.321529  219213 pod_ready.go:38] duration metric: took 21.625428997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:50:55.321547  219213 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:50:55.321592  219213 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:50:55.343154  219213 api_server.go:70] duration metric: took 21.759811987s to wait for apiserver process to appear ...
	I0814 09:50:55.343176  219213 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:50:55.343186  219213 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:50:55.347349  219213 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:50:55.348107  219213 api_server.go:139] control plane version: v1.21.3
	I0814 09:50:55.348127  219213 api_server.go:129] duration metric: took 4.944829ms to wait for apiserver health ...
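The healthz probe is a plain HTTPS GET against the apiserver; the same check can be reproduced with curl (illustrative; -k skips certificate verification since the apiserver presents the minikube-generated cert):

	$ curl -sk https://192.168.58.2:8443/healthz
	ok
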
	I0814 09:50:55.348136  219213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:50:55.523276  219213 system_pods.go:59] 9 kube-system pods found
	I0814 09:50:55.523298  219213 system_pods.go:61] "coredns-558bd4d5db-wjlqr" [052286f2-685f-4520-8f5c-13e35b07e27e] Running
	I0814 09:50:55.523303  219213 system_pods.go:61] "etcd-embed-certs-20210814094325-6746" [62f460fe-d11d-4e50-a549-f9a153888a5d] Running
	I0814 09:50:55.523306  219213 system_pods.go:61] "kindnet-kvv65" [c0bc8515-5565-4fb1-a82d-d01bc090d641] Running
	I0814 09:50:55.523311  219213 system_pods.go:61] "kube-apiserver-embed-certs-20210814094325-6746" [04913668-df62-4d8e-8166-fe1aaf7ba56b] Running
	I0814 09:50:55.523316  219213 system_pods.go:61] "kube-controller-manager-embed-certs-20210814094325-6746" [57b659e5-19e6-415a-995a-3e92b39b5a41] Running
	I0814 09:50:55.523319  219213 system_pods.go:61] "kube-proxy-xcshh" [cbf58cc2-48cb-4eca-8d30-904694fbb480] Running
	I0814 09:50:55.523323  219213 system_pods.go:61] "kube-scheduler-embed-certs-20210814094325-6746" [ccdd236c-694a-4805-a6cd-7fa58b99395e] Running
	I0814 09:50:55.523332  219213 system_pods.go:61] "metrics-server-7c784ccb57-5nrfw" [5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:50:55.523338  219213 system_pods.go:61] "storage-provisioner" [3f6d1385-66f0-49e9-a561-d557c138f7b6] Running
	I0814 09:50:55.523344  219213 system_pods.go:74] duration metric: took 175.203116ms to wait for pod list to return data ...
	I0814 09:50:55.523353  219213 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:50:55.721303  219213 default_sa.go:45] found service account: "default"
	I0814 09:50:55.721329  219213 default_sa.go:55] duration metric: took 197.969622ms for default service account to be created ...
	I0814 09:50:55.721339  219213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:50:55.923597  219213 system_pods.go:86] 9 kube-system pods found
	I0814 09:50:55.923624  219213 system_pods.go:89] "coredns-558bd4d5db-wjlqr" [052286f2-685f-4520-8f5c-13e35b07e27e] Running
	I0814 09:50:55.923629  219213 system_pods.go:89] "etcd-embed-certs-20210814094325-6746" [62f460fe-d11d-4e50-a549-f9a153888a5d] Running
	I0814 09:50:55.923634  219213 system_pods.go:89] "kindnet-kvv65" [c0bc8515-5565-4fb1-a82d-d01bc090d641] Running
	I0814 09:50:55.923639  219213 system_pods.go:89] "kube-apiserver-embed-certs-20210814094325-6746" [04913668-df62-4d8e-8166-fe1aaf7ba56b] Running
	I0814 09:50:55.923644  219213 system_pods.go:89] "kube-controller-manager-embed-certs-20210814094325-6746" [57b659e5-19e6-415a-995a-3e92b39b5a41] Running
	I0814 09:50:55.923651  219213 system_pods.go:89] "kube-proxy-xcshh" [cbf58cc2-48cb-4eca-8d30-904694fbb480] Running
	I0814 09:50:55.923655  219213 system_pods.go:89] "kube-scheduler-embed-certs-20210814094325-6746" [ccdd236c-694a-4805-a6cd-7fa58b99395e] Running
	I0814 09:50:55.923663  219213 system_pods.go:89] "metrics-server-7c784ccb57-5nrfw" [5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:50:55.923670  219213 system_pods.go:89] "storage-provisioner" [3f6d1385-66f0-49e9-a561-d557c138f7b6] Running
	I0814 09:50:55.923677  219213 system_pods.go:126] duration metric: took 202.332969ms to wait for k8s-apps to be running ...
	I0814 09:50:55.923687  219213 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:50:55.923726  219213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:50:55.932502  219213 system_svc.go:56] duration metric: took 8.810307ms WaitForService to wait for kubelet.
	I0814 09:50:55.932525  219213 kubeadm.go:547] duration metric: took 22.349186518s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:50:55.932552  219213 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:50:56.122070  219213 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:50:56.122094  219213 node_conditions.go:123] node cpu capacity is 8
	I0814 09:50:56.122107  219213 node_conditions.go:105] duration metric: took 189.549407ms to run NodePressure ...
	I0814 09:50:56.122116  219213 start.go:231] waiting for startup goroutines ...
	I0814 09:50:56.165685  219213 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:50:56.167880  219213 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210814094325-6746" cluster and "default" namespace by default
	I0814 09:50:58.219059  242948 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:50:58.309835  242948 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:50:58.309907  242948 ssh_runner.go:149] Run: containerd --version
	I0814 09:50:58.330960  242948 ssh_runner.go:149] Run: containerd --version
	I0814 09:50:58.353208  242948 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:50:58.353286  242948 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210814095040-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
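The Go template in the network inspect above flattens name, driver, subnet, gateway, MTU, and per-container IPs into one JSON object; the index guard around com.docker.network.driver.mtu avoids a template error when that option is absent. For interactive use, a much simpler query covers the common case (illustrative):

	$ docker network inspect default-k8s-different-port-20210814095040-6746 --format '{{json .IPAM.Config}}'
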
	I0814 09:50:58.391168  242948 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0814 09:50:58.394266  242948 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
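The grep-into-tempfile-then-cp idiom above, rather than sed -i, matters inside a container: /etc/hosts there is a bind mount, so it can be overwritten in place (cp) but not replaced by rename, which is what sed -i attempts. The net effect is one idempotent host record:

	$ grep host.minikube.internal /etc/hosts
	192.168.49.1	host.minikube.internal
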
	I0814 09:50:58.403013  242948 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:50:58.403075  242948 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:50:58.424249  242948 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:50:58.424264  242948 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:50:58.424296  242948 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:50:58.444133  242948 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:50:58.444150  242948 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:50:58.444182  242948 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:50:58.464037  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:50:58.464053  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:50:58.464062  242948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:50:58.464075  242948 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210814095040-6746 NodeName:default-k8s-different-port-20210814095040-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:50:58.464192  242948 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210814095040-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
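The dump above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration), later copied to /var/tmp/minikube/kubeadm.yaml. One way to exercise it without bootstrapping anything is to run only kubeadm's preflight phase against it (sketch, run inside the node):

	$ sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
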
	
	I0814 09:50:58.464276  242948 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210814095040-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
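The [Service] stanza above first blanks ExecStart= to discard any distro default, then substitutes minikube's full kubelet invocation; per the scp lines that follow, it lands as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once copied, the merged unit can be reviewed with (illustrative):

	$ systemctl cat kubelet
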
	I0814 09:50:58.464314  242948 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:50:58.470390  242948 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:50:58.470444  242948 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:50:58.476398  242948 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0814 09:50:58.487492  242948 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:50:58.498448  242948 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0814 09:50:58.509594  242948 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:50:58.512068  242948 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:50:58.520004  242948 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746 for IP: 192.168.49.2
	I0814 09:50:58.520042  242948 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:50:58.520057  242948 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:50:58.520106  242948 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key
	I0814 09:50:58.520115  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt with IP's: []
	I0814 09:50:58.605811  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt ...
	I0814 09:50:58.605832  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt: {Name:mkacaae754c3f3d8a12af248e60d4f2dfeb1fcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.605983  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key ...
	I0814 09:50:58.605995  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.key: {Name:mkc908febb624f8dcae4593839bc3cdd86a1ad31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.606077  242948 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2
	I0814 09:50:58.606087  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:50:58.792141  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 ...
	I0814 09:50:58.792164  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2: {Name:mk1ce321d1a9a1e324dde7b9a016555ddd6031d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.792303  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2 ...
	I0814 09:50:58.792317  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2: {Name:mk8ca5617228b674c440c829a6a0ed6ba7adf225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.792390  242948 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt
	I0814 09:50:58.792489  242948 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key
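The apiserver certificate assembled above carries the SAN IP set [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]: the node IP, the ClusterIP of the kubernetes service (first address of 10.96.0.0/12), loopback, and 10.0.0.1. One way to confirm after the fact (illustrative; $PROFILE is a stand-in for the long profile directory above, not a variable the test sets):

	$ openssl x509 -noout -text -in "$PROFILE/apiserver.crt" | grep -A1 'Subject Alternative Name'
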
	I0814 09:50:58.792543  242948 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key
	I0814 09:50:58.792551  242948 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt with IP's: []
	I0814 09:50:58.996340  242948 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt ...
	I0814 09:50:58.996371  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt: {Name:mk55a688d6c41aa245f7d2d45cd1b092fbfe314a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.996534  242948 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key ...
	I0814 09:50:58.996546  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key: {Name:mk240bac2833d6f959e53ffe7865c747fc43bc7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:50:58.996701  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:50:58.996736  242948 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:50:58.996746  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:50:58.996770  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:50:58.996833  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:50:58.996856  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:50:58.996899  242948 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:50:58.997780  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:50:59.014380  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:50:59.029550  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:50:59.045055  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:50:59.060084  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:50:59.074803  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:50:59.089518  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:50:59.104113  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:50:59.119093  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:50:59.134192  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:50:59.150511  242948 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:50:59.165586  242948 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:50:59.176450  242948 ssh_runner.go:149] Run: openssl version
	I0814 09:50:59.180631  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:50:59.187025  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.189664  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.189709  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:50:59.195171  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:50:59.201806  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:50:59.208282  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.211124  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.211160  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:50:59.215395  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:50:59.221855  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:50:59.228613  242948 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.231438  242948 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.231472  242948 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:50:59.236446  242948 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
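The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses for directory lookups: a CA in /etc/ssl/certs must be reachable under <hash>.0, which is precisely what the three ln -fs targets (51391683.0, 3ec20f2e.0, b5213941.0) provide. Spot check (illustrative):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ ls -l /etc/ssl/certs/b5213941.0
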
	I0814 09:50:59.243050  242948 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210814095040-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210814095040-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:50:59.243131  242948 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:50:59.243196  242948 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:50:59.266979  242948 cri.go:76] found id: ""
	I0814 09:50:59.267040  242948 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:50:59.273892  242948 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:50:59.280356  242948 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:50:59.280407  242948 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:50:59.286903  242948 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:50:59.286944  242948 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:50:59.556038  242948 out.go:204]   - Generating certificates and keys ...
	I0814 09:51:02.082739  242948 out.go:204]   - Booting up control plane ...
	I0814 09:51:15.625043  242948 out.go:204]   - Configuring RBAC rules ...
	I0814 09:51:16.037638  242948 cni.go:93] Creating CNI manager for ""
	I0814 09:51:16.037662  242948 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:51:16.039435  242948 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:51:16.039505  242948 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:51:16.042860  242948 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:51:16.042877  242948 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:51:16.054746  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:51:16.398428  242948 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:51:16.398487  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:16.398496  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=default-k8s-different-port-20210814095040-6746 minikube.k8s.io/updated_at=2021_08_14T09_51_16_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:16.413932  242948 ops.go:34] apiserver oom_adj: -16
	I0814 09:51:16.505770  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:17.072881  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:17.572393  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:18.072263  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:18.573168  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:19.072912  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:19.572947  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:20.072868  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:23.342009  242948 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.149988297s)
	I0814 09:51:23.572292  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:24.073122  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:24.573269  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:25.072616  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:25.572567  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:26.072311  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:26.572387  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:27.072516  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:27.572279  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:28.072505  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:28.573189  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:29.072874  242948 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:29.138319  242948 kubeadm.go:985] duration metric: took 12.739887122s to wait for elevateKubeSystemPrivileges.
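The burst of kubectl get sa default calls above is a readiness poll: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has run, so its appearance is a cheap proxy for the control plane being ready to accept RBAC-privileged work. A standalone equivalent (sketch):

	$ until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done
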
	I0814 09:51:29.138354  242948 kubeadm.go:392] StartCluster complete in 29.89530926s
	I0814 09:51:29.138374  242948 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:29.138449  242948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:51:29.139978  242948 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:29.657798  242948 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210814095040-6746" rescaled to 1
	I0814 09:51:29.657866  242948 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:51:29.659651  242948 out.go:177] * Verifying Kubernetes components...
	I0814 09:51:29.657923  242948 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0814 09:51:29.659725  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:29.659760  242948 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:51:29.657925  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:51:29.659796  242948 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:51:29.659807  242948 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:51:29.659824  242948 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:51:29.659839  242948 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:51:29.659848  242948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210814095040-6746"
	I0814 09:51:29.658108  242948 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:51:29.660202  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:51:29.660375  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:51:29.716396  242948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:51:29.716481  242948 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:29.716492  242948 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:51:29.716533  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:51:29.718685  242948 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:51:29.718705  242948 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:51:29.718732  242948 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:51:29.719277  242948 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:51:29.760207  242948 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
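The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile immediately before its forward directive, so host.minikube.internal resolves to the host gateway while all other names fall through to the normal resolver; the "host record injected" line further down confirms it took effect. To view the result (illustrative):

	$ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'hosts {'
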
	I0814 09:51:29.762702  242948 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210814095040-6746" to be "Ready" ...
	I0814 09:51:29.766525  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:51:29.771365  242948 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:29.771387  242948 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:51:29.771442  242948 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:51:29.815241  242948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32953 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:51:29.919610  242948 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:30.023216  242948 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:30.309474  242948 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0814 09:51:30.545473  242948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0814 09:51:30.545500  242948 addons.go:344] enableAddons completed in 887.587292ms
	I0814 09:51:31.769262  242948 node_ready.go:58] node "default-k8s-different-port-20210814095040-6746" has status "Ready":"False"
	I0814 09:51:33.770012  242948 node_ready.go:49] node "default-k8s-different-port-20210814095040-6746" has status "Ready":"True"
	I0814 09:51:33.770036  242948 node_ready.go:38] duration metric: took 4.007308727s waiting for node "default-k8s-different-port-20210814095040-6746" to be "Ready" ...
	I0814 09:51:33.770045  242948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:51:33.778600  242948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-8wdmx" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.289078  242948 pod_ready.go:92] pod "coredns-558bd4d5db-8wdmx" in "kube-system" namespace has status "Ready":"True"
	I0814 09:51:35.289107  242948 pod_ready.go:81] duration metric: took 1.51048183s waiting for pod "coredns-558bd4d5db-8wdmx" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.289119  242948 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.292748  242948 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:51:35.292764  242948 pod_ready.go:81] duration metric: took 3.636364ms waiting for pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.292775  242948 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.296578  242948 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:51:35.296592  242948 pod_ready.go:81] duration metric: took 3.811386ms waiting for pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.296600  242948 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.300048  242948 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:51:35.300063  242948 pod_ready.go:81] duration metric: took 3.455566ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.300072  242948 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w4gdt" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.303558  242948 pod_ready.go:92] pod "kube-proxy-w4gdt" in "kube-system" namespace has status "Ready":"True"
	I0814 09:51:35.303573  242948 pod_ready.go:81] duration metric: took 3.49573ms waiting for pod "kube-proxy-w4gdt" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.303581  242948 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.687464  242948 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:51:35.687483  242948 pod_ready.go:81] duration metric: took 383.895496ms waiting for pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:51:35.687493  242948 pod_ready.go:38] duration metric: took 1.917437648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:51:35.687507  242948 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:51:35.687542  242948 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:51:35.706239  242948 api_server.go:70] duration metric: took 6.048342302s to wait for apiserver process to appear ...
	I0814 09:51:35.706267  242948 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:51:35.706278  242948 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0814 09:51:35.710341  242948 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0814 09:51:35.711055  242948 api_server.go:139] control plane version: v1.21.3
	I0814 09:51:35.711074  242948 api_server.go:129] duration metric: took 4.801657ms to wait for apiserver health ...
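
The healthz wait above is a plain HTTPS GET against the apiserver (here on the non-default port 8444) that treats a 200 response with body "ok" as healthy. A hedged sketch: the real check trusts the cluster CA, while this one skips TLS verification for brevity.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Assumption: skip verification instead of loading minikube's CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8444/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}
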
	I0814 09:51:35.711081  242948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:51:35.894146  242948 system_pods.go:59] 8 kube-system pods found
	I0814 09:51:35.894216  242948 system_pods.go:61] "coredns-558bd4d5db-8wdmx" [67529a19-9d58-492f-9d21-35afcbb2c797] Running
	I0814 09:51:35.894240  242948 system_pods.go:61] "etcd-default-k8s-different-port-20210814095040-6746" [40686201-4565-4e25-81b7-6a9ac4e1c205] Running
	I0814 09:51:35.894260  242948 system_pods.go:61] "kindnet-lfgh9" [97b7e62b-7b6f-4527-9364-1a23db8b8fc2] Running
	I0814 09:51:35.894281  242948 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210814095040-6746" [fc4925dd-79bc-4fe9-97c6-6d48025abf31] Running
	I0814 09:51:35.894307  242948 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210814095040-6746" [c62fe36e-eb31-4ebe-ac0c-08d3cd275fd4] Running
	I0814 09:51:35.894326  242948 system_pods.go:61] "kube-proxy-w4gdt" [6baaf170-a540-4194-8597-53afe188a695] Running
	I0814 09:51:35.894347  242948 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210814095040-6746" [4ed01fd4-4f2f-41b7-b0ea-90dd28cafa08] Running
	I0814 09:51:35.894366  242948 system_pods.go:61] "storage-provisioner" [67430634-ec3c-4a0b-9498-0f61b64f800b] Running
	I0814 09:51:35.894387  242948 system_pods.go:74] duration metric: took 183.299243ms to wait for pod list to return data ...
	I0814 09:51:35.894408  242948 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:51:36.087791  242948 default_sa.go:45] found service account: "default"
	I0814 09:51:36.087811  242948 default_sa.go:55] duration metric: took 193.385318ms for default service account to be created ...
	I0814 09:51:36.087822  242948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:51:36.290092  242948 system_pods.go:86] 8 kube-system pods found
	I0814 09:51:36.290117  242948 system_pods.go:89] "coredns-558bd4d5db-8wdmx" [67529a19-9d58-492f-9d21-35afcbb2c797] Running
	I0814 09:51:36.290123  242948 system_pods.go:89] "etcd-default-k8s-different-port-20210814095040-6746" [40686201-4565-4e25-81b7-6a9ac4e1c205] Running
	I0814 09:51:36.290127  242948 system_pods.go:89] "kindnet-lfgh9" [97b7e62b-7b6f-4527-9364-1a23db8b8fc2] Running
	I0814 09:51:36.290132  242948 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210814095040-6746" [fc4925dd-79bc-4fe9-97c6-6d48025abf31] Running
	I0814 09:51:36.290137  242948 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210814095040-6746" [c62fe36e-eb31-4ebe-ac0c-08d3cd275fd4] Running
	I0814 09:51:36.290141  242948 system_pods.go:89] "kube-proxy-w4gdt" [6baaf170-a540-4194-8597-53afe188a695] Running
	I0814 09:51:36.290145  242948 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210814095040-6746" [4ed01fd4-4f2f-41b7-b0ea-90dd28cafa08] Running
	I0814 09:51:36.290148  242948 system_pods.go:89] "storage-provisioner" [67430634-ec3c-4a0b-9498-0f61b64f800b] Running
	I0814 09:51:36.290153  242948 system_pods.go:126] duration metric: took 202.326945ms to wait for k8s-apps to be running ...
	I0814 09:51:36.290161  242948 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:51:36.290197  242948 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:36.300603  242948 system_svc.go:56] duration metric: took 10.437199ms WaitForService to wait for kubelet.
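
The kubelet check above relies on systemctl's exit status: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active. A local sketch of that probe, mirroring the logged invocation (minikube runs it over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Arguments copied from the logged command; exit status 0 means active.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet running:", err == nil)
	}
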
	I0814 09:51:36.300625  242948 kubeadm.go:547] duration metric: took 6.642730816s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:51:36.300646  242948 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:51:36.488097  242948 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:51:36.488121  242948 node_conditions.go:123] node cpu capacity is 8
	I0814 09:51:36.488131  242948 node_conditions.go:105] duration metric: took 187.481923ms to run NodePressure ...
	I0814 09:51:36.488141  242948 start.go:231] waiting for startup goroutines ...
	I0814 09:51:36.529501  242948 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:51:36.531757  242948 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210814095040-6746" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	1862174cc5e0e       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   2                   913463a8ccf64
	efd9db92085eb       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   7e54e81316330
	8ef370c3ebb55       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   3cad7404cb4f2
	c2051bc4ae872       296a6d5035e2d       About a minute ago   Running             coredns                     0                   021199ccb79c5
	c145e33f75c99       6de166512aa22       About a minute ago   Running             kindnet-cni                 0                   c9bd7ea8434fc
	f1468c559df50       adb2816ea823a       About a minute ago   Running             kube-proxy                  0                   5d392d166e020
	df5983ab2d3f9       6be0dc1302e30       About a minute ago   Running             kube-scheduler              0                   09addeb11662d
	36ad15e77f314       bc2bb319a7038       About a minute ago   Running             kube-controller-manager     0                   1c88459572814
	a2029d64830b8       3d174f00aa39e       About a minute ago   Running             kube-apiserver              0                   07ce24f2cc07a
	90b59592c23d4       0369cf4303ffd       About a minute ago   Running             etcd                        0                   247a0e8677d04
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:45:14 UTC, end at Sat 2021-08-14 09:52:03 UTC. --
	Aug 14 09:50:41 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:41.960862226Z" level=info msg="Container to stop \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.062121875Z" level=info msg="TaskExit event &TaskExit{ContainerID:c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65,ID:c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65,Pid:5654,ExitStatus:137,ExitedAt:2021-08-14 09:50:42.061866149 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.101739951Z" level=info msg="shim disconnected" id=c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.101820216Z" level=error msg="copy shim log" error="read /proc/self/fd/83: file already closed"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.188891167Z" level=info msg="TearDown network for sandbox \"c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65\" successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.188934907Z" level=info msg="StopPodSandbox for \"c9f8cab27fa24b40b942f37544095a1f1128ace8dc335284a50bac79d60ada65\" returns successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.515303799Z" level=info msg="RemoveContainer for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.520544783Z" level=info msg="RemoveContainer for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" returns successfully"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.521181572Z" level=error msg="ContainerStatus for \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87fc25516f825514bc6a052f092dcabe9b964f2a813cac32fa40a50134c5e90c\": not found"
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.522692420Z" level=info msg="RemoveContainer for \"6bf02b7a63d0e39db037c2f3b89fba9d4dfee9f801d768b39968f72bd9d2b45a\""
	Aug 14 09:50:42 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:42.527393348Z" level=info msg="RemoveContainer for \"6bf02b7a63d0e39db037c2f3b89fba9d4dfee9f801d768b39968f72bd9d2b45a\" returns successfully"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.227521500Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.294737474Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" host=fake.domain
	Aug 14 09:50:51 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:51.295926302Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.227729472Z" level=info msg="CreateContainer within sandbox \"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,}"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.262462808Z" level=info msg="CreateContainer within sandbox \"913463a8ccf64a22c58014f2949057bcab44a1027e03d90e4e666b0393526c9c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:2,} returns container id \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.262950261Z" level=info msg="StartContainer for \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.434757635Z" level=info msg="StartContainer for \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\" returns successfully"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469151595Z" level=info msg="Finish piping stderr of container \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469184697Z" level=info msg="Finish piping stdout of container \"1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.469980873Z" level=info msg="TaskExit event &TaskExit{ContainerID:1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82,ID:1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82,Pid:6740,ExitStatus:1,ExitedAt:2021-08-14 09:50:59.469759043 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.505397460Z" level=info msg="shim disconnected" id=1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.505481929Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.547120670Z" level=info msg="RemoveContainer for \"8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0\""
	Aug 14 09:50:59 embed-certs-20210814094325-6746 containerd[336]: time="2021-08-14T09:50:59.551904618Z" level=info msg="RemoveContainer for \"8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0\" returns successfully"
	
	* 
	* ==> coredns [c2051bc4ae8724836fece2ca06268cde848802ecd77d139e6b35c4d067dfc9b5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	[INFO] Reloading complete
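
The "Running configuration MD5" lines above come from CoreDNS's reload plugin, which periodically hashes the server configuration and triggers a reload (with the health plugin's 5s lameduck window) when the digest changes. A toy sketch of computing such a digest; the Corefile path is an assumption:

	package main

	import (
		"crypto/md5"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/etc/coredns/Corefile") // hypothetical path
		if err != nil {
			panic(err)
		}
		// A changed digest between polls is what precedes "[INFO] Reloading".
		fmt.Printf("Running configuration MD5 = %x\n", md5.Sum(data))
	}
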
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.117256] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethdf2317bd
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e a8 32 d5 4d 7f 08 06        ......~.2.M...
	[  +0.035630] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth0b3713f0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 08 7c e8 24 aa 08 06        ........|.$...
	[  +0.851133] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000003] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +2.011842] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +4.227682] net_ratelimit: 2 callbacks suppressed
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000000] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +8.187413] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000025] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	
	* 
	* ==> etcd [90b59592c23d4916217ab3df49e4f36263dae73b3188ac93917d7c579962cb55] <==
	* 2021-08-14 09:50:11.649106 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-14 09:50:11.649343 I | etcdserver: b2c6679ac05f2cf1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)
	2021-08-14 09:50:11.649615 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-14 09:50:11.651328 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-14 09:50:11.651385 I | embed: listening for peers on 192.168.58.2:2380
	2021-08-14 09:50:11.651465 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 is starting a new election at term 1
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 became candidate at term 2
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2
	raft2021/08/14 09:50:11 INFO: b2c6679ac05f2cf1 became leader at term 2
	raft2021/08/14 09:50:11 INFO: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2
	2021-08-14 09:50:11.942588 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-14 09:50:11.943313 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-14 09:50:11.943369 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-14 09:50:11.943426 I | etcdserver: published {Name:embed-certs-20210814094325-6746 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-14 09:50:11.943442 I | embed: ready to serve client requests
	2021-08-14 09:50:11.943553 I | embed: ready to serve client requests
	2021-08-14 09:50:11.945597 I | embed: serving client requests on 192.168.58.2:2379
	2021-08-14 09:50:11.950474 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:50:29.041684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:34.901449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:44.865083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:50:54.864171 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:51:04.864015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
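
The periodic "/health OK (status code 200)" lines above are served by etcd's HTTP health handler; in this log etcd also exposes an unauthenticated metrics listener on http://127.0.0.1:2381, the usual probe target. A minimal sketch of such a probe, assuming local access to that listener:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:2381/health")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // e.g. 200 {"health":"true"}
	}
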
	
	* 
	* ==> kernel <==
	*  09:53:04 up  1:35,  0 users,  load average: 1.07, 1.31, 1.61
	Linux embed-certs-20210814094325-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a2029d64830b8cc096ea505bda7b0334dd1ceda315758acb811abf9d3030dc83] <==
	* W0814 09:52:51.905500       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:52:54.107548       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	E0814 09:52:57.222862       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:52:57.222991       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:52:57.224586       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:52:57.224590       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:52:57.225651       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:52:57.227183       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:52:57.228299       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:52:57.229443       1 trace.go:205] Trace[275417738]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:51:57.222) (total time: 60006ms):
	Trace[275417738]: [1m0.006526448s] [1m0.006526448s] END
	E0814 09:52:57.230508       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:52:57.231638       1 trace.go:205] Trace[483581216]: "Get" url:/api/v1/namespaces/kube-public,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:51:57.224) (total time: 60007ms):
	Trace[483581216]: [1m0.007015437s] [1m0.007015437s] END
	W0814 09:52:59.152108       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:53:00.481584       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0814 09:53:00.688112       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0814 09:53:04.060910       1 trace.go:205] Trace[1394152956]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:52:04.061) (total time: 59999ms):
	Trace[1394152956]: [59.99977478s] [59.99977478s] END
	E0814 09:53:04.060941       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0814 09:53:04.061054       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0814 09:53:04.100889       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0814 09:53:04.102017       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0814 09:53:04.103975       1 trace.go:205] Trace[315890774]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (14-Aug-2021 09:52:04.061) (total time: 60042ms):
	Trace[315890774]: [1m0.042853003s] [1m0.042853003s] END
	
	* 
	* ==> kube-controller-manager [36ad15e77f3143b061a2fde6617d6193878b3c467cae35b50071312b70c710ee] <==
	* I0814 09:50:33.088763       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-c7vfk"
	I0814 09:50:35.313219       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0814 09:50:35.331213       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:50:35.418588       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:50:35.426804       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-5nrfw"
	I0814 09:50:36.029281       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0814 09:50:36.117463       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:50:36.121445       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0814 09:50:36.122885       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.127069       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.129271       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.129538       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.203476       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:50:36.205802       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.206089       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.209774       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.210060       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.211663       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.211731       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:50:36.213131       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:50:36.213178       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:50:36.224567       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-s5twx"
	I0814 09:50:36.308419       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-gmk5j"
	E0814 09:51:02.307907       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:51:02.731841       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f1468c559df5065d27e78f84c1085bb1fa45c50cdaf89c77e3817016555855f9] <==
	* I0814 09:50:34.212539       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:50:34.212604       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:50:34.212632       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:50:34.412568       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:50:34.412610       1 server_others.go:212] Using iptables Proxier.
	I0814 09:50:34.412625       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:50:34.412658       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:50:34.413285       1 server.go:643] Version: v1.21.3
	I0814 09:50:34.420355       1 config.go:315] Starting service config controller
	I0814 09:50:34.420380       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:50:34.420424       1 config.go:224] Starting endpoint slice config controller
	I0814 09:50:34.420433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0814 09:50:34.425120       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:50:34.431401       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:50:34.520879       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:50:34.520934       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [df5983ab2d3f968b6e18b6352faec3b27768fce3ddf849aca55f9807fd30c799] <==
	* W0814 09:50:15.808240       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 09:50:15.808258       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 09:50:15.808266       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 09:50:15.824968       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:50:15.825017       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0814 09:50:15.825026       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:50:15.826778       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 09:50:15.905345       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:50:15.909182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909298       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:50:15.909404       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909484       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:50:15.909560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:50:15.909645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:50:15.909712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:50:15.909763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:50:15.909808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:15.909855       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:50:15.909905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:50:15.909963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:50:15.910012       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:16.719502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:50:16.731061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:50:16.736953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0814 09:50:17.527017       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:45:14 UTC, end at Sat 2021-08-14 09:53:04 UTC. --
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.241373    4845 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e384-ec50-41ce-9c60-fb8ed66f2b71-config-volume" (OuterVolumeSpecName: "config-volume") pod "4259e384-ec50-41ce-9c60-fb8ed66f2b71" (UID: "4259e384-ec50-41ce-9c60-fb8ed66f2b71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.265204    4845 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4259e384-ec50-41ce-9c60-fb8ed66f2b71-kube-api-access-nxgqp" (OuterVolumeSpecName: "kube-api-access-nxgqp") pod "4259e384-ec50-41ce-9c60-fb8ed66f2b71" (UID: "4259e384-ec50-41ce-9c60-fb8ed66f2b71"). InnerVolumeSpecName "kube-api-access-nxgqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.342257    4845 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4259e384-ec50-41ce-9c60-fb8ed66f2b71-config-volume\") on node \"embed-certs-20210814094325-6746\" DevicePath \"\""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.342294    4845 reconciler.go:319] "Volume detached for volume \"kube-api-access-nxgqp\" (UniqueName: \"kubernetes.io/projected/4259e384-ec50-41ce-9c60-fb8ed66f2b71-kube-api-access-nxgqp\") on node \"embed-certs-20210814094325-6746\" DevicePath \"\""
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:43.519347    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:43.519720    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:50:43 embed-certs-20210814094325-6746 kubelet[4845]: W0814 09:50:43.865362    4845 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod054dc08e-4bc7-4ae9-adf6-55f654ff6b86/8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0 WatchSource:0}: task 8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0 not found: not found
	Aug 14 09:50:46 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:46.321535    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:46 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:46.321811    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296140    4845 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296185    4845 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296308    4845 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vzhhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-5nrfw_kube-system(5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host
	Aug 14 09:50:51 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:51.296350    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.58.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-5nrfw" podUID=5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.225580    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.546177    4845 scope.go:111] "RemoveContainer" containerID="8302a078987bf156840dee6868d8b9336479cd4f2f22edeb901eb970c9637ed0"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:50:59.546474    4845 scope.go:111] "RemoveContainer" containerID="1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	Aug 14 09:50:59 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:50:59.546875    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:51:00 embed-certs-20210814094325-6746 kubelet[4845]: W0814 09:51:00.785937    4845 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod054dc08e-4bc7-4ae9-adf6-55f654ff6b86/1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82 WatchSource:0}: task 1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82 not found: not found
	Aug 14 09:51:02 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:51:02.226698    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-5nrfw" podUID=5fdab3ce-8f70-4d45-8bf8-fad6c17b49a7
	Aug 14 09:51:06 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:51:06.321486    4845 scope.go:111] "RemoveContainer" containerID="1862174cc5e0e50b1606f1d6f946c39cd9929764270ea8d6a5edf8b5596eef82"
	Aug 14 09:51:06 embed-certs-20210814094325-6746 kubelet[4845]: E0814 09:51:06.321858    4845 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gmk5j_kubernetes-dashboard(054dc08e-4bc7-4ae9-adf6-55f654ff6b86)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gmk5j" podUID=054dc08e-4bc7-4ae9-adf6-55f654ff6b86
	Aug 14 09:51:09 embed-certs-20210814094325-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:51:09 embed-certs-20210814094325-6746 kubelet[4845]: I0814 09:51:09.049856    4845 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 14 09:51:09 embed-certs-20210814094325-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:51:09 embed-certs-20210814094325-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [efd9db92085eb26718ded2b06f500de7834dd5ee613d4e3f578da32167b384af] <==
	* 2021/08/14 09:50:38 Using namespace: kubernetes-dashboard
	2021/08/14 09:50:38 Using in-cluster config to connect to apiserver
	2021/08/14 09:50:38 Using secret token for csrf signing
	2021/08/14 09:50:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:50:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:50:38 Successful initial request to the apiserver, version: v1.21.3
	2021/08/14 09:50:38 Generating JWE encryption key
	2021/08/14 09:50:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:50:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:50:39 Initializing JWE encryption key from synchronized object
	2021/08/14 09:50:39 Creating in-cluster Sidecar client
	2021/08/14 09:50:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:50:39 Serving insecurely on HTTP port: 9090
	2021/08/14 09:51:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:51:49 Metric client health check failed: an error on the server ("unknown") has prevented the request from succeeding (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:50:38 Starting overwatch
	
	* 
	* ==> storage-provisioner [8ef370c3ebb5558820986128e6bf34f822cc8824bc0b9ecac95426b15f11531b] <==
	* I0814 09:50:36.806345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:50:36.814591       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:50:36.814644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:50:36.820487       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:50:36.820644       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e!
	I0814 09:50:36.821582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a0b34a7-1595-4e6c-a60d-8ec24e8b8d67", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e became leader
	I0814 09:50:36.921284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210814094325-6746_753362cf-bbab-4029-a269-4e1698aeb42e!
	
	

-- /stdout --
** stderr ** 
	E0814 09:53:04.103937  250054 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (115.70s)

TestStartStop/group/newest-cni/serial/Pause (24.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210814095308-6746 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20210814095308-6746 --alsologtostderr -v=1: exit status 80 (1.735145966s)

-- stdout --
	* Pausing node newest-cni-20210814095308-6746 ... 
	
	

-- /stdout --
** stderr ** 
	I0814 09:55:04.939498  266810 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:55:04.939584  266810 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:55:04.939589  266810 out.go:311] Setting ErrFile to fd 2...
	I0814 09:55:04.939592  266810 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:55:04.939712  266810 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:55:04.939894  266810 out.go:305] Setting JSON to false
	I0814 09:55:04.939912  266810 mustload.go:65] Loading cluster: newest-cni-20210814095308-6746
	I0814 09:55:04.940532  266810 config.go:177] Loaded profile config "newest-cni-20210814095308-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:55:04.941690  266810 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:04.980979  266810 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:04.981667  266810 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210814095308-6746 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0814 09:55:04.983778  266810 out.go:177] * Pausing node newest-cni-20210814095308-6746 ... 
	I0814 09:55:04.983803  266810 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:04.984052  266810 ssh_runner.go:149] Run: systemctl --version
	I0814 09:55:04.984088  266810 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:05.023013  266810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:05.119834  266810 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:55:05.129570  266810 pause.go:50] kubelet running: true
	I0814 09:55:05.129644  266810 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:55:05.248878  266810 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:55:05.248964  266810 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:55:05.327297  266810 cri.go:76] found id: "ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92"
	I0814 09:55:05.327322  266810 cri.go:76] found id: "5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e"
	I0814 09:55:05.327329  266810 cri.go:76] found id: "d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83"
	I0814 09:55:05.327334  266810 cri.go:76] found id: "4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea"
	I0814 09:55:05.327339  266810 cri.go:76] found id: "c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc"
	I0814 09:55:05.327346  266810 cri.go:76] found id: "e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da"
	I0814 09:55:05.327351  266810 cri.go:76] found id: "3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08"
	I0814 09:55:05.327357  266810 cri.go:76] found id: "5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf"
	I0814 09:55:05.327361  266810 cri.go:76] found id: "8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0"
	I0814 09:55:05.327374  266810 cri.go:76] found id: "7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581"
	I0814 09:55:05.327383  266810 cri.go:76] found id: "2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4"
	I0814 09:55:05.327389  266810 cri.go:76] found id: "97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce"
	I0814 09:55:05.327397  266810 cri.go:76] found id: ""
	I0814 09:55:05.327447  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:55:05.358994  266810 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d/rootfs","created":"2021-08-14T09:54:57.640997516Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210814095308-6746_47a3143e2feb1ae8894f32915a29bd17"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689","pid":963,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1801c8f895d52405b4877e7bc5
6632017325c0e6078003fae0ccffa66e650689","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689/rootfs","created":"2021-08-14T09:54:57.67300542Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210814095308-6746_b7b9ff3c18fe1ab2f1225f2ac28dc5df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","pid":1234,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e/rootfs","created":"2021-08-14T09:55:02.365136206Z","annotations":{"io.kubernetes.cri.con
tainer-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5jhgl_1c1f42ab-2957-447b-9911-2959da7ffe6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea","pid":1087,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea/rootfs","created":"2021-08-14T09:54:57.965061876Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e"
,"pid":1272,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e/rootfs","created":"2021-08-14T09:55:02.553019762Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df/rootfs","created":"2021-08-14T09:54:57.673029128Z","annotations":{"io.kubern
etes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210814095308-6746_07054ab79fc868fa0fe8c7dfc466d014"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","pid":1241,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47/rootfs","created":"2021-08-14T09:55:02.573044795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-qmdzb_515a8aac-189c-47a3-9a37-3210ef5cfd44"},"owner":"
root"},{"ociVersion":"1.0.2-dev","id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059/rootfs","created":"2021-08-14T09:54:57.672943376Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210814095308-6746_c03752632ea898d3845de95b85585861"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc","pid":1078,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc","
rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc/rootfs","created":"2021-08-14T09:54:57.965014498Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83","pid":1088,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83/rootfs","created":"2021-08-14T09:54:57.965043501Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1801c8f895d52405b4877e7bc5
6632017325c0e6078003fae0ccffa66e650689"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da","pid":1040,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da/rootfs","created":"2021-08-14T09:54:57.912948942Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92","pid":1353,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92","rootfs":"/run/containerd/io.containerd.runtime.v2.task
/k8s.io/ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92/rootfs","created":"2021-08-14T09:55:02.99698346Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47"},"owner":"root"}]
	I0814 09:55:05.359233  266810 cri.go:113] list returned 12 containers
	I0814 09:55:05.359254  266810 cri.go:116] container: {ID:17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d Status:running}
	I0814 09:55:05.359268  266810 cri.go:118] skipping 17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d - not in ps
	I0814 09:55:05.359274  266810 cri.go:116] container: {ID:1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689 Status:running}
	I0814 09:55:05.359282  266810 cri.go:118] skipping 1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689 - not in ps
	I0814 09:55:05.359290  266810 cri.go:116] container: {ID:3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e Status:running}
	I0814 09:55:05.359301  266810 cri.go:118] skipping 3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e - not in ps
	I0814 09:55:05.359311  266810 cri.go:116] container: {ID:4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea Status:running}
	I0814 09:55:05.359321  266810 cri.go:116] container: {ID:5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e Status:running}
	I0814 09:55:05.359332  266810 cri.go:116] container: {ID:9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df Status:running}
	I0814 09:55:05.359342  266810 cri.go:118] skipping 9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df - not in ps
	I0814 09:55:05.359351  266810 cri.go:116] container: {ID:9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 Status:running}
	I0814 09:55:05.359355  266810 cri.go:118] skipping 9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 - not in ps
	I0814 09:55:05.359359  266810 cri.go:116] container: {ID:be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059 Status:running}
	I0814 09:55:05.359365  266810 cri.go:118] skipping be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059 - not in ps
	I0814 09:55:05.359373  266810 cri.go:116] container: {ID:c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc Status:running}
	I0814 09:55:05.359380  266810 cri.go:116] container: {ID:d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83 Status:running}
	I0814 09:55:05.359386  266810 cri.go:116] container: {ID:e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da Status:running}
	I0814 09:55:05.359397  266810 cri.go:116] container: {ID:ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92 Status:running}
	I0814 09:55:05.359445  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea
	I0814 09:55:05.374223  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea 5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e
	I0814 09:55:05.386766  266810 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea 5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:55:05Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:55:05.663278  266810 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:55:05.672661  266810 pause.go:50] kubelet running: false
	I0814 09:55:05.672710  266810 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:55:05.745284  266810 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:55:05.745368  266810 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:55:05.812569  266810 cri.go:76] found id: "ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92"
	I0814 09:55:05.812598  266810 cri.go:76] found id: "5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e"
	I0814 09:55:05.812613  266810 cri.go:76] found id: "d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83"
	I0814 09:55:05.812619  266810 cri.go:76] found id: "4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea"
	I0814 09:55:05.812628  266810 cri.go:76] found id: "c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc"
	I0814 09:55:05.812634  266810 cri.go:76] found id: "e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da"
	I0814 09:55:05.812639  266810 cri.go:76] found id: "3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08"
	I0814 09:55:05.812650  266810 cri.go:76] found id: "5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf"
	I0814 09:55:05.812659  266810 cri.go:76] found id: "8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0"
	I0814 09:55:05.812671  266810 cri.go:76] found id: "7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581"
	I0814 09:55:05.812679  266810 cri.go:76] found id: "2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4"
	I0814 09:55:05.812684  266810 cri.go:76] found id: "97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce"
	I0814 09:55:05.812692  266810 cri.go:76] found id: ""
	I0814 09:55:05.812738  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:55:05.843156  266810 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d/rootfs","created":"2021-08-14T09:54:57.640997516Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210814095308-6746_47a3143e2feb1ae8894f32915a29bd17"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689","pid":963,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1801c8f895d52405b4877e7bc5
6632017325c0e6078003fae0ccffa66e650689","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689/rootfs","created":"2021-08-14T09:54:57.67300542Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210814095308-6746_b7b9ff3c18fe1ab2f1225f2ac28dc5df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","pid":1234,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e/rootfs","created":"2021-08-14T09:55:02.365136206Z","annotations":{"io.kubernetes.cri.con
tainer-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5jhgl_1c1f42ab-2957-447b-9911-2959da7ffe6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea","pid":1087,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea/rootfs","created":"2021-08-14T09:54:57.965061876Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e",
"pid":1272,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e/rootfs","created":"2021-08-14T09:55:02.553019762Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df/rootfs","created":"2021-08-14T09:54:57.673029128Z","annotations":{"io.kuberne
tes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210814095308-6746_07054ab79fc868fa0fe8c7dfc466d014"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","pid":1241,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47/rootfs","created":"2021-08-14T09:55:02.573044795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-qmdzb_515a8aac-189c-47a3-9a37-3210ef5cfd44"},"owner":"r
oot"},{"ociVersion":"1.0.2-dev","id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059/rootfs","created":"2021-08-14T09:54:57.672943376Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210814095308-6746_c03752632ea898d3845de95b85585861"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc","pid":1078,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc","r
ootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc/rootfs","created":"2021-08-14T09:54:57.965014498Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83","pid":1088,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83/rootfs","created":"2021-08-14T09:54:57.965043501Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1801c8f895d52405b4877e7bc56
632017325c0e6078003fae0ccffa66e650689"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da","pid":1040,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da/rootfs","created":"2021-08-14T09:54:57.912948942Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92","pid":1353,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92","rootfs":"/run/containerd/io.containerd.runtime.v2.task/
k8s.io/ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92/rootfs","created":"2021-08-14T09:55:02.99698346Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47"},"owner":"root"}]
	I0814 09:55:05.843330  266810 cri.go:113] list returned 12 containers
	I0814 09:55:05.843345  266810 cri.go:116] container: {ID:17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d Status:running}
	I0814 09:55:05.843359  266810 cri.go:118] skipping 17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d - not in ps
	I0814 09:55:05.843368  266810 cri.go:116] container: {ID:1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689 Status:running}
	I0814 09:55:05.843378  266810 cri.go:118] skipping 1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689 - not in ps
	I0814 09:55:05.843386  266810 cri.go:116] container: {ID:3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e Status:running}
	I0814 09:55:05.843395  266810 cri.go:118] skipping 3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e - not in ps
	I0814 09:55:05.843404  266810 cri.go:116] container: {ID:4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea Status:paused}
	I0814 09:55:05.843412  266810 cri.go:122] skipping {4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea paused}: state = "paused", want "running"
	I0814 09:55:05.843431  266810 cri.go:116] container: {ID:5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e Status:running}
	I0814 09:55:05.843441  266810 cri.go:116] container: {ID:9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df Status:running}
	I0814 09:55:05.843448  266810 cri.go:118] skipping 9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df - not in ps
	I0814 09:55:05.843458  266810 cri.go:116] container: {ID:9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 Status:running}
	I0814 09:55:05.843469  266810 cri.go:118] skipping 9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 - not in ps
	I0814 09:55:05.843477  266810 cri.go:116] container: {ID:be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059 Status:running}
	I0814 09:55:05.843484  266810 cri.go:118] skipping be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059 - not in ps
	I0814 09:55:05.843494  266810 cri.go:116] container: {ID:c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc Status:running}
	I0814 09:55:05.843498  266810 cri.go:116] container: {ID:d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83 Status:running}
	I0814 09:55:05.843504  266810 cri.go:116] container: {ID:e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da Status:running}
	I0814 09:55:05.843516  266810 cri.go:116] container: {ID:ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92 Status:running}
	I0814 09:55:05.843560  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e
	I0814 09:55:05.858702  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc
	I0814 09:55:05.870694  266810 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:55:05Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:55:06.411365  266810 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:55:06.420694  266810 pause.go:50] kubelet running: false
	I0814 09:55:06.420734  266810 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:55:06.495153  266810 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:55:06.495220  266810 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:55:06.562478  266810 cri.go:76] found id: "ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92"
	I0814 09:55:06.562500  266810 cri.go:76] found id: "5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e"
	I0814 09:55:06.562507  266810 cri.go:76] found id: "d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83"
	I0814 09:55:06.562513  266810 cri.go:76] found id: "4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea"
	I0814 09:55:06.562518  266810 cri.go:76] found id: "c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc"
	I0814 09:55:06.562524  266810 cri.go:76] found id: "e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da"
	I0814 09:55:06.562529  266810 cri.go:76] found id: "3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08"
	I0814 09:55:06.562534  266810 cri.go:76] found id: "5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf"
	I0814 09:55:06.562540  266810 cri.go:76] found id: "8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0"
	I0814 09:55:06.562551  266810 cri.go:76] found id: "7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581"
	I0814 09:55:06.562560  266810 cri.go:76] found id: "2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4"
	I0814 09:55:06.562566  266810 cri.go:76] found id: "97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce"
	I0814 09:55:06.562575  266810 cri.go:76] found id: ""
	I0814 09:55:06.562616  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:55:06.592660  266810 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d/rootfs","created":"2021-08-14T09:54:57.640997516Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210814095308-6746_47a3143e2feb1ae8894f32915a29bd17"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689","pid":963,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1801c8f895d52405b4877e7bc5
6632017325c0e6078003fae0ccffa66e650689","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689/rootfs","created":"2021-08-14T09:54:57.67300542Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210814095308-6746_b7b9ff3c18fe1ab2f1225f2ac28dc5df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","pid":1234,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e/rootfs","created":"2021-08-14T09:55:02.365136206Z","annotations":{"io.kubernetes.cri.con
tainer-type":"sandbox","io.kubernetes.cri.sandbox-id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5jhgl_1c1f42ab-2957-447b-9911-2959da7ffe6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea","pid":1087,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea/rootfs","created":"2021-08-14T09:54:57.965061876Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e",
"pid":1272,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e/rootfs","created":"2021-08-14T09:55:02.553019762Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df/rootfs","created":"2021-08-14T09:54:57.673029128Z","annotations":{"io.kubernet
es.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210814095308-6746_07054ab79fc868fa0fe8c7dfc466d014"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","pid":1241,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47/rootfs","created":"2021-08-14T09:55:02.573044795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-qmdzb_515a8aac-189c-47a3-9a37-3210ef5cfd44"},"owner":"ro
ot"},{"ociVersion":"1.0.2-dev","id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059/rootfs","created":"2021-08-14T09:54:57.672943376Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210814095308-6746_c03752632ea898d3845de95b85585861"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc","pid":1078,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc","ro
otfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc/rootfs","created":"2021-08-14T09:54:57.965014498Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83","pid":1088,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83/rootfs","created":"2021-08-14T09:54:57.965043501Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1801c8f895d52405b4877e7bc566
32017325c0e6078003fae0ccffa66e650689"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da","pid":1040,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da/rootfs","created":"2021-08-14T09:54:57.912948942Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92","pid":1353,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k
8s.io/ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92/rootfs","created":"2021-08-14T09:55:02.99698346Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47"},"owner":"root"}]
	I0814 09:55:06.592844  266810 cri.go:113] list returned 12 containers
	I0814 09:55:06.592862  266810 cri.go:116] container: {ID:17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d Status:running}
	I0814 09:55:06.592875  266810 cri.go:118] skipping 17cf182148edbd61bdfec7a34e1ac1cad4051a3faebd016f2dacce7769eb784d - not in ps
	I0814 09:55:06.592882  266810 cri.go:116] container: {ID:1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689 Status:running}
	I0814 09:55:06.592886  266810 cri.go:118] skipping 1801c8f895d52405b4877e7bc56632017325c0e6078003fae0ccffa66e650689 - not in ps
	I0814 09:55:06.592892  266810 cri.go:116] container: {ID:3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e Status:running}
	I0814 09:55:06.592897  266810 cri.go:118] skipping 3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e - not in ps
	I0814 09:55:06.592903  266810 cri.go:116] container: {ID:4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea Status:paused}
	I0814 09:55:06.592908  266810 cri.go:122] skipping {4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea paused}: state = "paused", want "running"
	I0814 09:55:06.592919  266810 cri.go:116] container: {ID:5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e Status:paused}
	I0814 09:55:06.592924  266810 cri.go:122] skipping {5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e paused}: state = "paused", want "running"
	I0814 09:55:06.592931  266810 cri.go:116] container: {ID:9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df Status:running}
	I0814 09:55:06.592935  266810 cri.go:118] skipping 9bcaab314a378551fd514fbdb74cc4bf8723523ecf307d312e14c246b2a0c9df - not in ps
	I0814 09:55:06.592941  266810 cri.go:116] container: {ID:9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 Status:running}
	I0814 09:55:06.592945  266810 cri.go:118] skipping 9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 - not in ps
	I0814 09:55:06.592952  266810 cri.go:116] container: {ID:be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059 Status:running}
	I0814 09:55:06.592956  266810 cri.go:118] skipping be7b27c626e04d34bca4af0fdba522b4f2ac14852dbe413c4c7e19c73bd65059 - not in ps
	I0814 09:55:06.592962  266810 cri.go:116] container: {ID:c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc Status:running}
	I0814 09:55:06.592966  266810 cri.go:116] container: {ID:d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83 Status:running}
	I0814 09:55:06.592973  266810 cri.go:116] container: {ID:e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da Status:running}
	I0814 09:55:06.592981  266810 cri.go:116] container: {ID:ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92 Status:running}
	I0814 09:55:06.593017  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc
	I0814 09:55:06.607022  266810 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83
	I0814 09:55:06.621824  266810 out.go:177] 
	W0814 09:55:06.621983  266810 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:55:06Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:55:06Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0814 09:55:06.622010  266810 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0814 09:55:06.624519  266810 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0814 09:55:06.625968  266810 out.go:177] 

                                                
                                                
** /stderr **
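The stderr above shows the root cause of the pause failure: runc's pause subcommand accepts exactly one container ID, but minikube batched two IDs into a single invocation, so every batched call exits with status 1 ("pause" requires exactly 1 argument(s)) while the single-ID calls succeed. A minimal sketch, assuming nothing beyond the runc CLI behavior visible in the log, of pausing the containers one invocation at a time (placeholder IDs; this is not minikube's actual pause implementation):

    // Illustrative only. The --root path matches the containerd runc
    // root used throughout the log above; container IDs are placeholders.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func pauseAll(ids []string) error {
    	for _, id := range ids {
    		// One ID per invocation: batching IDs into a single
    		// "runc pause" call, as the failing commands above do,
    		// always exits with status 1.
    		out, err := exec.Command("sudo", "runc",
    			"--root", "/run/containerd/runc/k8s.io", "pause", id).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	// Real IDs appear in the "found id:" lines above.
    	if err := pauseAll([]string{"<container-id-1>", "<container-id-2>"}); err != nil {
    		fmt.Println(err)
    	}
    }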
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p newest-cni-20210814095308-6746 --alsologtostderr -v=1 failed: exit status 80
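The retry.go lines above show the surrounding pattern: retry the pause with an increasing delay (~276ms, then ~540ms), then surface GUEST_PAUSE once attempts are exhausted. Because the batched runc call fails deterministically, no number of retries can succeed. A schematic of that retry shape (an assumed form, not minikube's actual retry.go):

    // Schematic retry-with-backoff mirroring the "will retry after ..."
    // log lines above; an assumed shape, not minikube's implementation.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func retryWithBackoff(op func() error, delays []time.Duration) error {
    	err := op()
    	for _, d := range delays {
    		if err == nil {
    			return nil
    		}
    		time.Sleep(d) // e.g. ~276ms, then ~540ms, as logged above
    		err = op()
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(func() error {
    		// A batched "runc pause <id> <id>" fails the same way on
    		// every attempt, so the final error is surfaced as
    		// GUEST_PAUSE in the log above.
    		return errors.New(`runc: "pause" requires exactly 1 argument(s)`)
    	}, []time.Duration{276 * time.Millisecond, 540 * time.Millisecond})
    	fmt.Println("final:", err)
    }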
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210814095308-6746
helpers_test.go:236: (dbg) docker inspect newest-cni-20210814095308-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991",
	        "Created": "2021-08-14T09:53:10.1692035Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263769,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:54:31.652141454Z",
	            "FinishedAt": "2021-08-14T09:54:29.33137841Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/hostname",
	        "HostsPath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/hosts",
	        "LogPath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991-json.log",
	        "Name": "/newest-cni-20210814095308-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20210814095308-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210814095308-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210814095308-6746",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210814095308-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210814095308-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210814095308-6746",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210814095308-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "010e8632f9b86b5c6acf54143964fcf7624f3525f114dcb1789424ea05fb3eb4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/010e8632f9b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210814095308-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "00d6fdeb074b"
	                    ],
	                    "NetworkID": "405b862b9dfe05154028ef97fb7bca891d96549290a433a955934c71cb864401",
	                    "EndpointID": "9f98f403d076b32d4a427c735f816e86decf4fa3ca7ad1bcdd7d5c1719345134",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
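
Note: the "Ports" section of the inspect output above is what later provisioning steps read to find the host-mapped SSH port (22/tcp published at 127.0.0.1:32968). The harness does this with a Go template (docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'", visible in the provisioning log further below); as a minimal standalone sketch of the same lookup, the Go program below decodes the inspect JSON directly. The struct is a hand-rolled subset of the schema for illustration, not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// inspectEntry declares only the fields this sketch reads from the
// `docker inspect` JSON array shown above.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: portlookup <container-name>")
	}
	name := os.Args[1] // e.g. newest-cni-20210814095308-6746
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	ssh := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(ssh) == 0 {
		log.Fatal("22/tcp has no host binding (container not running?)")
	}
	fmt.Printf("%s:%s\n", ssh[0].HostIp, ssh[0].HostPort) // 127.0.0.1:32968 above
}

A stopped container reports an empty "Ports" map, which is why SSH-dependent status checks fail as soon as the node goes down.
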
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210814095308-6746 -n newest-cni-20210814095308-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210814095308-6746 -n newest-cni-20210814095308-6746: exit status 2 (305.258665ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
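
The --format={{.Host}} argument above is a Go text/template rendered against minikube's status struct, which is why a bare field reference prints just "Running". A minimal sketch of the same mechanism follows; the Status type here is illustrative, not minikube's actual definition.

package main

import (
	"os"
	"text/template"
)

// Status mimics the kind of value a --format template is executed against.
// The field names are assumptions for illustration only.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The same template string that was passed as --format={{.Host}}.
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"})
}
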
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210814095308-6746 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210814095308-6746 logs -n 25: exit status 110 (10.840546488s)
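
Each "(dbg) Run" / "Non-zero exit" pair in this trace comes from the harness shelling out to the minikube binary and recording its combined output and exit code. A stripped-down sketch of that capture pattern is shown here (not the actual helpers_test.go implementation); the real captured output follows below.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the command from the trace and capture stdout+stderr together.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-20210814095308-6746", "logs", "-n", "25")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	// A non-zero exit surfaces as *exec.ExitError (exit status 110 above).
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("Non-zero exit:", ee.ExitCode())
	}
}
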

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                            | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:50 UTC | Sat, 14 Aug 2021 09:44:51 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:51 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                          | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:48:31 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:48:45 UTC | Sat, 14 Aug 2021 09:48:45 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:34 UTC | Sat, 14 Aug 2021 09:50:38 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:38 UTC | Sat, 14 Aug 2021 09:50:39 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20210814095039-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:39 UTC | Sat, 14 Aug 2021 09:50:40 UTC |
	|         | disable-driver-mounts-20210814095039-6746                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:50:56 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                            | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:06 UTC | Sat, 14 Aug 2021 09:51:07 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:08 UTC | Sat, 14 Aug 2021 09:51:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:40 UTC | Sat, 14 Aug 2021 09:51:36 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:51:45 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:52:06 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:04 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:54:08 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:08 UTC | Sat, 14 Aug 2021 09:54:09 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:09 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:04 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:54:29
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:54:29.972378  263219 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:54:29.972462  263219 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:29.972475  263219 out.go:311] Setting ErrFile to fd 2...
	I0814 09:54:29.972479  263219 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:29.972573  263219 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:54:29.972846  263219 out.go:305] Setting JSON to false
	I0814 09:54:30.009462  263219 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5832,"bootTime":1628929038,"procs":267,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:54:30.009542  263219 start.go:121] virtualization: kvm guest
	I0814 09:54:30.011519  263219 out.go:177] * [newest-cni-20210814095308-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:54:30.013033  263219 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:54:30.011654  263219 notify.go:169] Checking for updates...
	I0814 09:54:30.014430  263219 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:54:30.015850  263219 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:54:30.017244  263219 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:54:30.017641  263219 config.go:177] Loaded profile config "newest-cni-20210814095308-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:54:30.018126  263219 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:54:30.066375  263219 docker.go:132] docker version: linux-19.03.15
	I0814 09:54:30.066454  263219 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:54:30.145259  263219 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:54:30.101615392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:54:30.145366  263219 docker.go:244] overlay module found
	I0814 09:54:30.147397  263219 out.go:177] * Using the docker driver based on existing profile
	I0814 09:54:30.147421  263219 start.go:278] selected driver: docker
	I0814 09:54:30.147429  263219 start.go:751] validating driver "docker" against &{Name:newest-cni-20210814095308-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:54:30.147526  263219 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:54:30.147562  263219 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:54:30.147578  263219 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:54:30.148943  263219 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:54:30.149784  263219 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:54:30.228921  263219 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:54:30.185568693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0814 09:54:30.229046  263219 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:54:30.229076  263219 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:54:30.231218  263219 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:54:30.231327  263219 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 09:54:30.231354  263219 cni.go:93] Creating CNI manager for ""
	I0814 09:54:30.231362  263219 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:54:30.231375  263219 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:54:30.231391  263219 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:54:30.231402  263219 start_flags.go:277] config:
	{Name:newest-cni-20210814095308-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:54:30.233150  263219 out.go:177] * Starting control plane node newest-cni-20210814095308-6746 in cluster newest-cni-20210814095308-6746
	I0814 09:54:30.233187  263219 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:54:30.234581  263219 out.go:177] * Pulling base image ...
	I0814 09:54:30.234612  263219 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:54:30.234649  263219 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:54:30.234666  263219 cache.go:56] Caching tarball of preloaded images
	I0814 09:54:30.234721  263219 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:54:30.234868  263219 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:54:30.234885  263219 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0814 09:54:30.235033  263219 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/config.json ...
	I0814 09:54:30.322925  263219 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:54:30.322948  263219 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:54:30.322968  263219 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:54:30.322999  263219 start.go:313] acquiring machines lock for newest-cni-20210814095308-6746: {Name:mka71e6cef7914d8cc25826ac188b3d65cc88bef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:54:30.323105  263219 start.go:317] acquired machines lock for "newest-cni-20210814095308-6746" in 61.331µs
	I0814 09:54:30.323124  263219 start.go:93] Skipping create...Using existing machine configuration
	I0814 09:54:30.323131  263219 fix.go:55] fixHost starting: 
	I0814 09:54:30.323368  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:54:30.365622  263219 fix.go:108] recreateIfNeeded on newest-cni-20210814095308-6746: state=Stopped err=<nil>
	W0814 09:54:30.365659  263219 fix.go:134] unexpected machine state, will restart: <nil>
	I0814 09:54:28.757277  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:30.759710  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:30.367988  263219 out.go:177] * Restarting existing docker container for "newest-cni-20210814095308-6746" ...
	I0814 09:54:30.368066  263219 cli_runner.go:115] Run: docker start newest-cni-20210814095308-6746
	I0814 09:54:31.659073  263219 cli_runner.go:168] Completed: docker start newest-cni-20210814095308-6746: (1.290967649s)
	I0814 09:54:31.659166  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:54:31.696414  263219 kic.go:420] container "newest-cni-20210814095308-6746" state is running.
	I0814 09:54:31.696848  263219 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210814095308-6746
	I0814 09:54:31.738430  263219 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/config.json ...
	I0814 09:54:31.738644  263219 machine.go:88] provisioning docker machine ...
	I0814 09:54:31.738681  263219 ubuntu.go:169] provisioning hostname "newest-cni-20210814095308-6746"
	I0814 09:54:31.738736  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:31.779017  263219 main.go:130] libmachine: Using SSH client type: native
	I0814 09:54:31.779246  263219 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0814 09:54:31.779273  263219 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210814095308-6746 && echo "newest-cni-20210814095308-6746" | sudo tee /etc/hostname
	I0814 09:54:31.779718  263219 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42876->127.0.0.1:32968: read: connection reset by peer
	I0814 09:54:34.911471  263219 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210814095308-6746
	
	I0814 09:54:34.911538  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:33.258147  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:35.757363  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:34.949564  263219 main.go:130] libmachine: Using SSH client type: native
	I0814 09:54:34.949802  263219 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0814 09:54:34.949842  263219 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210814095308-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210814095308-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210814095308-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:54:35.071890  263219 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:54:35.071921  263219 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:54:35.071945  263219 ubuntu.go:177] setting up certificates
	I0814 09:54:35.071954  263219 provision.go:83] configureAuth start
	I0814 09:54:35.072001  263219 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210814095308-6746
	I0814 09:54:35.110468  263219 provision.go:138] copyHostCerts
	I0814 09:54:35.110530  263219 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:54:35.110547  263219 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:54:35.110596  263219 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:54:35.110667  263219 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:54:35.110677  263219 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:54:35.110699  263219 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:54:35.110748  263219 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:54:35.110757  263219 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:54:35.110771  263219 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:54:35.110805  263219 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210814095308-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210814095308-6746]
	I0814 09:54:35.284923  263219 provision.go:172] copyRemoteCerts
	I0814 09:54:35.284990  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:54:35.285042  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.324256  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.411115  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:54:35.426265  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0814 09:54:35.441057  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:54:35.455785  263219 provision.go:86] duration metric: configureAuth took 383.821198ms
	I0814 09:54:35.455810  263219 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:54:35.455969  263219 config.go:177] Loaded profile config "newest-cni-20210814095308-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:54:35.455980  263219 machine.go:91] provisioned docker machine in 3.717319089s
	I0814 09:54:35.455987  263219 start.go:267] post-start starting for "newest-cni-20210814095308-6746" (driver="docker")
	I0814 09:54:35.455993  263219 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:54:35.456031  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:54:35.456067  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.494054  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.583272  263219 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:54:35.585764  263219 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:54:35.585796  263219 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:54:35.585805  263219 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:54:35.585810  263219 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:54:35.585818  263219 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:54:35.585859  263219 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:54:35.585928  263219 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:54:35.586011  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:54:35.591981  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:54:35.607225  263219 start.go:270] post-start completed in 151.226337ms
	I0814 09:54:35.607277  263219 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:54:35.607310  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.645581  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.732601  263219 fix.go:57] fixHost completed within 5.409464523s
	I0814 09:54:35.732629  263219 start.go:80] releasing machines lock for "newest-cni-20210814095308-6746", held for 5.409513777s
	I0814 09:54:35.732697  263219 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210814095308-6746
	I0814 09:54:35.772352  263219 ssh_runner.go:149] Run: systemctl --version
	I0814 09:54:35.772403  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.772429  263219 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:54:35.772485  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.811915  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.812173  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.896233  263219 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:54:35.922223  263219 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:54:35.930767  263219 docker.go:153] disabling docker service ...
	I0814 09:54:35.930808  263219 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:54:35.939528  263219 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:54:35.947344  263219 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:54:36.003011  263219 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:54:36.054666  263219 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:54:36.062627  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:54:36.073757  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
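The opaque payload above is simply the node's /etc/containerd/config.toml, base64-encoded so it survives shell quoting and then restored by the `base64 -d | sudo tee` pipeline. To inspect it offline, decode it the same way; the sketch below uses only the opening characters of the payload (the full blob is in the wrapped log line above):

package main

import (
	"encoding/base64"
	"fmt"
	"log"
)

func main() {
	// First characters of the payload from the log line above; the full
	// blob decodes to the node's complete /etc/containerd/config.toml.
	blob := "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIg=="
	out, err := base64.StdEncoding.DecodeString(blob)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out)) // root = "/var/lib/containerd"
}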
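The %!s(MISSING) in the two /bin/bash -c commands above (and the "0%!"(MISSING) values in the kubelet config rendered further down) is almost certainly a printf-verb artifact of the logging path, not a failure on the node: the remote command legitimately contains a literal `printf %s`, and when that string is later passed through Go's fmt package as a format with no matching argument, fmt substitutes %!s(MISSING) for the verb. A tiny demonstration of how the marker arises:

package main

import "fmt"

func main() {
	// The remote command genuinely contains "printf %s"; rendering that
	// string through a printf-style logger with no matching argument makes
	// Go's fmt package emit "%!s(MISSING)" in place of the verb.
	cmd := `sudo mkdir -p /etc && printf %s "runtime-endpoint: ..."`
	fmt.Printf(cmd + "\n") // prints: ... printf %!s(MISSING) "runtime-endpoint: ..."
}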
	I0814 09:54:36.085148  263219 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:54:36.090731  263219 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:54:36.090767  263219 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:54:36.097400  263219 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
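The status-255 sysctl above is expected on a fresh node: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why the log calls the failure "might be okay", loads the module, and then switches on IPv4 forwarding. A sketch of that fallback (assumes it runs as root on the node):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// /proc/sys/net/bridge/* only exists once br_netfilter is loaded, so a
	// failing sysctl read is the cue to modprobe the module.
	key := "net.bridge.bridge-nf-call-iptables"
	if err := exec.Command("sysctl", key).Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
			return
		}
	}
	// Pod-to-pod routing also needs IPv4 forwarding enabled.
	if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}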
	I0814 09:54:36.102890  263219 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:54:36.156054  263219 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:54:36.224541  263219 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:54:36.224605  263219 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:54:36.227881  263219 start.go:413] Will wait 60s for crictl version
	I0814 09:54:36.227926  263219 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:54:36.249600  263219 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:54:36Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
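containerd was restarted moments earlier, so its CRI gRPC service briefly answers "server is not initialized yet"; retry.go backs off, and the same `sudo crictl version` succeeds about eleven seconds later (09:54:47 below). A rough sketch of such a poll-with-backoff loop (the 60s budget mirrors the "Will wait 60s for crictl version" line above; the doubling interval is illustrative, not the exact schedule in the source):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll `crictl version` until containerd's CRI service finishes
	// initializing, doubling the wait between attempts.
	deadline := time.Now().Add(60 * time.Second)
	for backoff := time.Second; time.Now().Before(deadline); backoff *= 2 {
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("not ready yet (%v), retrying in %v\n", err, backoff)
		time.Sleep(backoff)
	}
	fmt.Println("gave up waiting for crictl")
}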
	I0814 09:54:38.258254  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:40.757314  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:43.258416  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:45.757513  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:47.297983  263219 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:54:47.320600  263219 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:54:47.320649  263219 ssh_runner.go:149] Run: containerd --version
	I0814 09:54:47.343783  263219 ssh_runner.go:149] Run: containerd --version
	I0814 09:54:47.367813  263219 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0814 09:54:47.367887  263219 cli_runner.go:115] Run: docker network inspect newest-cni-20210814095308-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:54:47.405957  263219 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:54:47.409142  263219 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
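The one-liner above is minikube's /etc/hosts upsert idiom: grep -v strips any stale host.minikube.internal entry, the fresh tab-separated mapping is appended, and the result goes to a temp file that is then copied back with sudo (a bare `>` redirect into /etc/hosts would run without root). Reproduced as a standalone sketch:

package main

import "os/exec"

func main() {
	// Raw string keeps \t literal for bash's $'...' quoting; the echo line
	// carries a real tab between the IP and the hostname, as in the log.
	script := `{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`
	if err := exec.Command("/bin/bash", "-c", script).Run(); err != nil {
		panic(err)
	}
}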
	I0814 09:54:47.420131  263219 out.go:177]   - kubelet.network-plugin=cni
	I0814 09:54:47.423383  263219 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0814 09:54:47.423457  263219 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:54:47.423512  263219 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:54:47.446811  263219 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:54:47.446829  263219 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:54:47.446862  263219 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:54:47.468474  263219 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:54:47.468493  263219 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:54:47.468529  263219 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:54:47.488870  263219 cni.go:93] Creating CNI manager for ""
	I0814 09:54:47.488893  263219 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:54:47.488904  263219 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0814 09:54:47.488917  263219 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210814095308-6746 NodeName:newest-cni-20210814095308-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:54:47.489049  263219 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210814095308-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:54:47.489142  263219 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210814095308-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
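In the [Service] section above, the empty `ExecStart=` is the standard systemd drop-in trick: it clears the ExecStart inherited from kubelet.service before the full command line is redefined, and the file lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 617-byte scp below). A sketch of writing such a drop-in and making systemd pick it up (kubelet flags abbreviated here; assumes root):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The blank ExecStart= clears the command inherited from
	// kubelet.service; the second line redefines it. daemon-reload makes
	// systemd re-read the unit tree.
	dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --config=/var/lib/kubelet/config.yaml\n"
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		panic(err)
	}
}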
	I0814 09:54:47.489186  263219 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0814 09:54:47.495406  263219 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:54:47.495466  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:54:47.501317  263219 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (617 bytes)
	I0814 09:54:47.512521  263219 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0814 09:54:47.523490  263219 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
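The 2221-byte kubeadm.yaml.new staged above is the manifest rendered at 09:54:47.489: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A dependency-free way to sanity-check the document count and kinds in such a file (toy input standing in for the real file):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Toy stand-in for /var/tmp/minikube/kubeadm.yaml: four YAML documents
	// separated by "---", as rendered in the log above.
	rendered := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(rendered, "---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}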
	I0814 09:54:47.534519  263219 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:54:47.537142  263219 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:54:47.545276  263219 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746 for IP: 192.168.58.2
	I0814 09:54:47.545312  263219 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:54:47.545325  263219 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:54:47.545371  263219 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/client.key
	I0814 09:54:47.545397  263219 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/apiserver.key.cee25041
	I0814 09:54:47.545412  263219 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/proxy-client.key
	I0814 09:54:47.545509  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:54:47.545548  263219 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:54:47.545558  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:54:47.545583  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:54:47.545609  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:54:47.545633  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:54:47.545687  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:54:47.546623  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:54:47.562014  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:54:47.577214  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:54:47.592255  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:54:47.607206  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:54:47.621996  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:54:47.637131  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:54:47.652557  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:54:47.667610  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:54:47.682608  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:54:47.697613  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:54:47.712546  263219 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:54:47.723560  263219 ssh_runner.go:149] Run: openssl version
	I0814 09:54:47.728074  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:54:47.734595  263219 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:54:47.737470  263219 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:54:47.737504  263219 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:54:47.741923  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:54:47.748540  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:54:47.755676  263219 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:54:47.758734  263219 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:54:47.758778  263219 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:54:47.763208  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:54:47.769070  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:54:47.775553  263219 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:54:47.778275  263219 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:54:47.778311  263219 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:54:47.782612  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:54:47.788618  263219 kubeadm.go:390] StartCluster: {Name:newest-cni-20210814095308-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:54:47.788714  263219 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:54:47.788753  263219 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:54:47.810759  263219 cri.go:76] found id: "3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08"
	I0814 09:54:47.810776  263219 cri.go:76] found id: "5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf"
	I0814 09:54:47.810780  263219 cri.go:76] found id: "8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0"
	I0814 09:54:47.810784  263219 cri.go:76] found id: "7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581"
	I0814 09:54:47.810787  263219 cri.go:76] found id: "2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4"
	I0814 09:54:47.810791  263219 cri.go:76] found id: "97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce"
	I0814 09:54:47.810795  263219 cri.go:76] found id: ""
	I0814 09:54:47.810820  263219 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:54:47.823695  263219 cri.go:103] JSON = null
	W0814 09:54:47.823735  263219 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 6
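Before restarting, minikube cross-checks `crictl ps` against `runc list` to find containers stuck in a paused state; here runc printed a literal `null` (which decodes to an empty list) while crictl had found six IDs, so the unpause pass is skipped with the warning above and the flow continues. The shape of that check:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// `runc list -f json` printed a literal "null", which unmarshals into
	// an empty slice, while crictl had reported six container IDs.
	var runcContainers []struct {
		ID string `json:"id"`
	}
	if err := json.Unmarshal([]byte("null"), &runcContainers); err != nil {
		panic(err)
	}
	crictlCount := 6 // from the "found id:" lines above
	if len(runcContainers) != crictlCount {
		fmt.Printf("unpause skipped: list returned %d containers, but ps returned %d\n",
			len(runcContainers), crictlCount)
	}
}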
	I0814 09:54:47.823781  263219 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:54:47.829711  263219 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0814 09:54:47.829729  263219 kubeadm.go:600] restartCluster start
	I0814 09:54:47.829756  263219 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0814 09:54:47.835604  263219 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:47.836581  263219 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210814095308-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:54:47.836858  263219 kubeconfig.go:128] "newest-cni-20210814095308-6746" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig - will repair!
	I0814 09:54:47.837258  263219 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:54:47.839973  263219 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:54:47.846358  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:47.846402  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:47.857824  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.058152  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.058241  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.071639  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.258909  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.258985  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.272459  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.458734  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.458826  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.471961  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.658250  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.658339  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.671874  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.858176  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.858240  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.871258  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.058537  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.058613  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.072434  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.258696  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.258762  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.272007  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.458364  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.458436  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.470778  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.657968  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.658042  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.670775  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.858030  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.858102  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.870762  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:47.757543  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:50.258429  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:50.058443  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.058505  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.071297  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.258488  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.258548  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.271635  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.458924  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.458992  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.471709  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.657922  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.657992  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.671178  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.858417  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.858476  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.871478  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.871498  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.871533  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.883639  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.883663  263219 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
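The burst of "Checking apiserver status" entries above is restartCluster polling `pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms; when no apiserver process appears before the short deadline, it gives up and falls through to reconfiguring the cluster, as the two lines above record. In outline (interval and deadline are illustrative, not the exact values in the source):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Keep looking for a running kube-apiserver; on timeout, fall back to
	// reconfiguring the cluster, as the log does.
	deadline := time.Now().Add(3 * time.Second)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("needs reconfigure: timed out waiting for the condition")
}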
	I0814 09:54:50.883669  263219 kubeadm.go:1032] stopping kube-system containers ...
	I0814 09:54:50.883681  263219 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:54:50.883723  263219 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:54:50.922065  263219 cri.go:76] found id: "3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08"
	I0814 09:54:50.922090  263219 cri.go:76] found id: "5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf"
	I0814 09:54:50.922097  263219 cri.go:76] found id: "8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0"
	I0814 09:54:50.922103  263219 cri.go:76] found id: "7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581"
	I0814 09:54:50.922108  263219 cri.go:76] found id: "2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4"
	I0814 09:54:50.922112  263219 cri.go:76] found id: "97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce"
	I0814 09:54:50.922115  263219 cri.go:76] found id: ""
	I0814 09:54:50.922120  263219 cri.go:221] Stopping containers: [3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08 5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf 8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0 7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581 2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4 97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce]
	I0814 09:54:50.922168  263219 ssh_runner.go:149] Run: which crictl
	I0814 09:54:50.924855  263219 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08 5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf 8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0 7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581 2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4 97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce
	I0814 09:54:50.946244  263219 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0814 09:54:50.955189  263219 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:54:50.961482  263219 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 14 09:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 14 09:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 14 09:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 14 09:53 /etc/kubernetes/scheduler.conf
	
	I0814 09:54:50.961534  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 09:54:50.967471  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 09:54:50.973384  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 09:54:50.979163  263219 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.979211  263219 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:54:50.984910  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 09:54:50.990763  263219 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.990811  263219 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:54:50.996329  263219 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:54:51.002220  263219 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0814 09:54:51.002246  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.043434  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.653259  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.772704  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.822706  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.873247  263219 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:54:51.873311  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:52.410014  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:52.909983  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:53.410543  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:53.910003  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:54.409809  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:54.910029  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:52.757954  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:55.257696  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:55.409812  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:55.909793  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:56.410716  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:56.910783  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:57.409799  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:57.910366  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:58.409907  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:58.425524  263219 api_server.go:70] duration metric: took 6.552278205s to wait for apiserver process to appear ...
	I0814 09:54:58.425549  263219 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:54:58.425558  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:54:57.258496  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:59.258629  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:01.483437  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 09:55:01.483463  263219 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 09:55:01.984132  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:01.988491  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:55:01.988512  263219 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
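The 403 for system:anonymous at 09:55:01 is expected rather than alarming: in recent Kubernetes releases, unauthenticated access to /healthz is granted by the bootstrap RBAC roles (the system:public-info-viewer ClusterRole), and the 500s above show `[-]poststarthook/rbac/bootstrap-roles failed`, i.e. those roles had not been reconciled yet. The wait loop therefore treats any non-200 as "try again". A sketch of such a probe (certificate verification skipped because the endpoint serves the cluster's private CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Treat anything but 200 from /healthz as "not ready yet" and retry.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready, status:", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
}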
	I0814 09:55:02.484027  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:02.488941  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:55:02.488965  263219 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:55:02.984527  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:02.989064  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:55:02.994840  263219 api_server.go:139] control plane version: v1.22.0-rc.0
	I0814 09:55:02.994860  263219 api_server.go:129] duration metric: took 4.569305048s to wait for apiserver health ...
	I0814 09:55:02.994869  263219 cni.go:93] Creating CNI manager for ""
	I0814 09:55:02.994875  263219 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:55:02.996662  263219 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:55:02.996709  263219 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:55:03.000415  263219 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0814 09:55:03.000439  263219 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:55:03.013912  263219 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:55:03.183787  263219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:55:03.194795  263219 system_pods.go:59] 9 kube-system pods found
	I0814 09:55:03.194829  263219 system_pods.go:61] "coredns-78fcd69978-gz25q" [20b6b7da-c5c2-4631-8357-1a6ffaba0b3f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.194836  263219 system_pods.go:61] "etcd-newest-cni-20210814095308-6746" [1d958fd9-c1c4-4211-8ceb-d9f0c8d19ede] Running
	I0814 09:55:03.194842  263219 system_pods.go:61] "kindnet-qmdzb" [515a8aac-189c-47a3-9a37-3210ef5cfd44] Running
	I0814 09:55:03.194846  263219 system_pods.go:61] "kube-apiserver-newest-cni-20210814095308-6746" [0810d2fb-3ed7-43f1-8978-ab9fbf53f8f5] Running
	I0814 09:55:03.194850  263219 system_pods.go:61] "kube-controller-manager-newest-cni-20210814095308-6746" [ce6fab88-2ecd-4927-9a7d-a74284dadee2] Running
	I0814 09:55:03.194854  263219 system_pods.go:61] "kube-proxy-5jhgl" [1c1f42ab-2957-447b-9911-2959da7ffe6d] Running
	I0814 09:55:03.194859  263219 system_pods.go:61] "kube-scheduler-newest-cni-20210814095308-6746" [4bd13c2a-5f90-4e9a-897f-5d58ee4467e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:55:03.194865  263219 system_pods.go:61] "metrics-server-7c784ccb57-lq9bb" [4173e8b0-e67c-4d55-aa26-732c3e6ff081] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.194874  263219 system_pods.go:61] "storage-provisioner" [9a8a638e-e802-41aa-9fca-d4ed41608c70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.194879  263219 system_pods.go:74] duration metric: took 11.070687ms to wait for pod list to return data ...
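The three Pending pods above are all blocked by the node.kubernetes.io/not-ready taint, which clears once the kindnet CNI applied at 09:55:03.013 brings the node Ready; note the profile's VerifyComponents map earlier sets node_ready:false, so this start does not wait for that to happen. One way to confirm the taint from outside (assumes kubectl on PATH with the profile's kubeconfig):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print every taint key on every node; node.kubernetes.io/not-ready
	// should disappear shortly after the CNI reports the node Ready.
	out, err := exec.Command("kubectl", "get", "nodes",
		"-o", "jsonpath={.items[*].spec.taints[*].key}").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println(string(out))
}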
	I0814 09:55:03.194889  263219 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:55:03.197888  263219 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:55:03.197921  263219 node_conditions.go:123] node cpu capacity is 8
	I0814 09:55:03.197936  263219 node_conditions.go:105] duration metric: took 3.042815ms to run NodePressure ...
	I0814 09:55:03.197951  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:55:03.342948  263219 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:55:03.356692  263219 ops.go:34] apiserver oom_adj: -16
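The oom_adj check reads /proc/<apiserver-pid>/oom_adj. The legacy oom_adj scale runs -17..15 and is derived from oom_score_adj (-1000..1000), so the -16 seen here corresponds to roughly -998, the oom_score_adj the kubelet assigns to critical static pods; in other words, the apiserver is all but exempt from the OOM killer. The scaling, spelled out:

package main

import "fmt"

func main() {
	// Legacy oom_adj is oom_score_adj scaled from -1000..1000 down to
	// -17..15; kubelet's -998 for critical static pods rounds to -16.
	oomScoreAdj := -998
	legacy := oomScoreAdj * 17 / 1000
	fmt.Println(legacy) // -16
}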
	I0814 09:55:03.356709  263219 kubeadm.go:604] restartCluster took 15.526974986s
	I0814 09:55:03.356716  263219 kubeadm.go:392] StartCluster complete in 15.568106207s
	I0814 09:55:03.356731  263219 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:55:03.356837  263219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:55:03.357667  263219 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:55:03.361521  263219 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210814095308-6746" rescaled to 1
	I0814 09:55:03.361574  263219 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0814 09:55:03.361595  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:55:03.363379  263219 out.go:177] * Verifying Kubernetes components...
	I0814 09:55:03.363434  263219 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:55:03.361664  263219 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0814 09:55:03.363506  263219 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.363528  263219 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.363538  263219 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:55:03.363572  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.363577  263219 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.361786  263219 config.go:177] Loaded profile config "newest-cni-20210814095308-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:55:03.363581  263219 addons.go:59] Setting dashboard=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.363592  263219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210814095308-6746"
	I0814 09:55:03.363604  263219 addons.go:135] Setting addon dashboard=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.363623  263219 addons.go:147] addon dashboard should already be in state true
	I0814 09:55:03.363652  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.363606  263219 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.363732  263219 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.363746  263219 addons.go:147] addon metrics-server should already be in state true
	I0814 09:55:03.363781  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.363914  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.364095  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.364115  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.364234  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.416970  263219 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0814 09:55:03.418696  263219 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0814 09:55:03.418775  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0814 09:55:03.418788  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0814 09:55:03.418847  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:03.420770  263219 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0814 09:55:03.422926  263219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:55:03.423077  263219 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:55:03.423089  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:55:03.423141  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:03.420854  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 09:55:03.423480  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0814 09:55:03.423549  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:03.428439  263219 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.428461  263219 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:55:03.428490  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.428902  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.443739  263219 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0814 09:55:03.449440  263219 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:55:03.449770  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:55:03.469347  263219 api_server.go:70] duration metric: took 107.743895ms to wait for apiserver process to appear ...
	I0814 09:55:03.469372  263219 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:55:03.469383  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:03.476321  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:55:03.477254  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.477304  263219 api_server.go:139] control plane version: v1.22.0-rc.0
	I0814 09:55:03.477323  263219 api_server.go:129] duration metric: took 7.944897ms to wait for apiserver health ...
	I0814 09:55:03.477335  263219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:55:03.478668  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.483131  263219 system_pods.go:59] 9 kube-system pods found
	I0814 09:55:03.483167  263219 system_pods.go:61] "coredns-78fcd69978-gz25q" [20b6b7da-c5c2-4631-8357-1a6ffaba0b3f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.483178  263219 system_pods.go:61] "etcd-newest-cni-20210814095308-6746" [1d958fd9-c1c4-4211-8ceb-d9f0c8d19ede] Running
	I0814 09:55:03.483194  263219 system_pods.go:61] "kindnet-qmdzb" [515a8aac-189c-47a3-9a37-3210ef5cfd44] Running
	I0814 09:55:03.483201  263219 system_pods.go:61] "kube-apiserver-newest-cni-20210814095308-6746" [0810d2fb-3ed7-43f1-8978-ab9fbf53f8f5] Running
	I0814 09:55:03.483211  263219 system_pods.go:61] "kube-controller-manager-newest-cni-20210814095308-6746" [ce6fab88-2ecd-4927-9a7d-a74284dadee2] Running
	I0814 09:55:03.483220  263219 system_pods.go:61] "kube-proxy-5jhgl" [1c1f42ab-2957-447b-9911-2959da7ffe6d] Running
	I0814 09:55:03.483232  263219 system_pods.go:61] "kube-scheduler-newest-cni-20210814095308-6746" [4bd13c2a-5f90-4e9a-897f-5d58ee4467e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:55:03.483243  263219 system_pods.go:61] "metrics-server-7c784ccb57-lq9bb" [4173e8b0-e67c-4d55-aa26-732c3e6ff081] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.483255  263219 system_pods.go:61] "storage-provisioner" [9a8a638e-e802-41aa-9fca-d4ed41608c70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.483265  263219 system_pods.go:74] duration metric: took 5.921097ms to wait for pod list to return data ...
	I0814 09:55:03.483278  263219 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:55:03.485639  263219 default_sa.go:45] found service account: "default"
	I0814 09:55:03.485662  263219 default_sa.go:55] duration metric: took 2.376939ms for default service account to be created ...
	I0814 09:55:03.485672  263219 kubeadm.go:547] duration metric: took 124.073585ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0814 09:55:03.485696  263219 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:55:03.486516  263219 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:55:03.486534  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:55:03.486615  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:03.487650  263219 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:55:03.487674  263219 node_conditions.go:123] node cpu capacity is 8
	I0814 09:55:03.487690  263219 node_conditions.go:105] duration metric: took 1.988394ms to run NodePressure ...
	I0814 09:55:03.487701  263219 start.go:231] waiting for startup goroutines ...
	I0814 09:55:03.488462  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.529983  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.577421  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:55:03.577541  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0814 09:55:03.577564  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0814 09:55:03.581019  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 09:55:03.581036  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0814 09:55:03.590270  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0814 09:55:03.590287  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0814 09:55:03.593085  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 09:55:03.593102  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0814 09:55:03.602634  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0814 09:55:03.602650  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0814 09:55:03.605498  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:55:03.605514  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0814 09:55:03.615981  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0814 09:55:03.615999  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0814 09:55:03.618407  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:55:03.629586  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0814 09:55:03.629605  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0814 09:55:03.630364  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:55:03.642536  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0814 09:55:03.642553  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0814 09:55:03.718968  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0814 09:55:03.718994  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0814 09:55:03.733810  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0814 09:55:03.733838  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0814 09:55:03.815012  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:55:03.815038  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0814 09:55:03.832642  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:55:04.108364  263219 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210814095308-6746"
	I0814 09:55:04.243099  263219 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0814 09:55:04.243126  263219 addons.go:344] enableAddons completed in 881.466189ms
	I0814 09:55:04.289541  263219 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0814 09:55:04.291219  263219 out.go:177] 
	W0814 09:55:04.291353  263219 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0814 09:55:04.292703  263219 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0814 09:55:04.294148  263219 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210814095308-6746" cluster and "default" namespace by default
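
The "minor skew: 2" line above is minikube comparing the kubectl client's minor version against the cluster's and warning once the gap exceeds one minor release. minikube does this with a semver library; a hand-rolled sketch of the same check (function and variable names here are illustrative, not minikube's):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf extracts the minor version from a string like "v1.22.0-rc.0" or "1.20.5".
	func minorOf(v string) int {
		v = strings.TrimPrefix(v, "v")
		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return m
	}

	func main() {
		client, cluster := "1.20.5", "v1.22.0-rc.0" // versions reported in this log
		skew := minorOf(cluster) - minorOf(client)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // minikube warns when this exceeds 1
	}
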
	I0814 09:55:01.757880  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:03.758948  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:06.258001  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
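
The pod_ready.go lines above come from a second test process (pid 250455) polling for the metrics-server pod's Ready condition. A minimal client-go sketch of that kind of poll, assuming a reachable kubeconfig (the path and the two-second interval below are illustrative, not minikube's actual helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-7c784ccb57-fb564", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
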
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	ef09908540d7f       6de166512aa22       4 seconds ago        Running             kindnet-cni               1                   9c614b223bfca
	5bacdc27c2e81       ea6b13ed84e03       5 seconds ago        Running             kube-proxy                1                   3f81a33e54991
	d855c2c39957c       cf9cba6c3e4a8       9 seconds ago        Running             kube-controller-manager   1                   1801c8f895d52
	4fe28eb2dc10e       b2462aa94d403       9 seconds ago        Running             kube-apiserver            1                   9bcaab314a378
	c8f7521d5e47e       7da2efaa5b480       9 seconds ago        Running             kube-scheduler            1                   be7b27c626e04
	e8a692b02eb5f       0048118155842       9 seconds ago        Running             etcd                      1                   17cf182148edb
	3cf010f2c16cd       6de166512aa22       59 seconds ago       Exited              kindnet-cni               0                   83bcd0e47a241
	5e4d53be68daa       ea6b13ed84e03       59 seconds ago       Exited              kube-proxy                0                   2ad25b7db0173
	8934417c26f11       0048118155842       About a minute ago   Exited              etcd                      0                   ab8031108aef3
	7812503546803       cf9cba6c3e4a8       About a minute ago   Exited              kube-controller-manager   0                   7c26f97724898
	2f5be69ea0aa5       7da2efaa5b480       About a minute ago   Exited              kube-scheduler            0                   3fe6609137187
	97422526a92f0       b2462aa94d403       About a minute ago   Exited              kube-apiserver            0                   57e14a4b09668
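
The table above is minikube's rendering of CRI container state. Roughly the same listing can be pulled straight from containerd, whose Kubernetes-managed containers live in the "k8s.io" namespace; a sketch with the containerd Go client, assuming the conventional socket path (not read from this log):

	package main

	import (
		"context"
		"fmt"

		"github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)

	func main() {
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			panic(err)
		}
		defer client.Close()

		// CRI-managed containers are kept under the "k8s.io" namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
		containers, err := client.Containers(ctx)
		if err != nil {
			panic(err)
		}
		for _, c := range containers {
			state := "no task" // exited containers may have no live task
			if task, err := c.Task(ctx, nil); err == nil {
				if st, err := task.Status(ctx); err == nil {
					state = string(st.Status)
				}
			}
			fmt.Printf("%s\t%s\n", c.ID(), state)
		}
	}
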
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:54:31 UTC, end at Sat 2021-08-14 09:55:07 UTC. --
	Aug 14 09:54:58 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:54:58.019264749Z" level=info msg="StartContainer for \"d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83\" returns successfully"
	Aug 14 09:54:58 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:54:58.019276941Z" level=info msg="StartContainer for \"4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea\" returns successfully"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:01.524615635Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209405723Z" level=info msg="StopPodSandbox for \"2ad25b7db0173b319378331996118dff8dea36520fe0fb9199772a879e112a92\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209510101Z" level=info msg="Container to stop \"5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209608040Z" level=info msg="TearDown network for sandbox \"2ad25b7db0173b319378331996118dff8dea36520fe0fb9199772a879e112a92\" successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209620641Z" level=info msg="StopPodSandbox for \"2ad25b7db0173b319378331996118dff8dea36520fe0fb9199772a879e112a92\" returns successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210007541Z" level=info msg="StopPodSandbox for \"83bcd0e47a241079722a0d2db557637f5ab94b693118d620ddd690781e1362d6\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210056302Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-5jhgl,Uid:1c1f42ab-2957-447b-9911-2959da7ffe6d,Namespace:kube-system,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210072364Z" level=info msg="Container to stop \"3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210246628Z" level=info msg="TearDown network for sandbox \"83bcd0e47a241079722a0d2db557637f5ab94b693118d620ddd690781e1362d6\" successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210261376Z" level=info msg="StopPodSandbox for \"83bcd0e47a241079722a0d2db557637f5ab94b693118d620ddd690781e1362d6\" returns successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210686818Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-qmdzb,Uid:515a8aac-189c-47a3-9a37-3210ef5cfd44,Namespace:kube-system,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.232666206Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 pid=1193
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.233264169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e pid=1195
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.378762535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jhgl,Uid:1c1f42ab-2957-447b-9911-2959da7ffe6d,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.381197333Z" level=info msg="CreateContainer within sandbox \"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.438777842Z" level=info msg="CreateContainer within sandbox \"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.439174939Z" level=info msg="StartContainer for \"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.603698417Z" level=info msg="StartContainer for \"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e\" returns successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.704590004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-qmdzb,Uid:515a8aac-189c-47a3-9a37-3210ef5cfd44,Namespace:kube-system,Attempt:1,} returns sandbox id \"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.710245343Z" level=info msg="CreateContainer within sandbox \"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.775137530Z" level=info msg="CreateContainer within sandbox \"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.775581594Z" level=info msg="StartContainer for \"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92\""
	Aug 14 09:55:03 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:03.014567862Z" level=info msg="StartContainer for \"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.035630] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth0b3713f0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 08 7c e8 24 aa 08 06        ........|.$...
	[  +0.851133] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000003] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +2.011842] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +4.227682] net_ratelimit: 2 callbacks suppressed
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000000] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +8.187413] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000025] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[Aug14 09:53] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:54] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0] <==
	* {"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.800980327s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.795983314s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[830146092] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:85; }","duration":"1.801055769s","start":"2021-08-14T09:53:50.512Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[830146092] 'agreement among raft nodes before linearized reading'  (duration: 1.798057939s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.606273997s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-20210814095308-6746\" ","response":"range_response_count:1 size:6222"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[286981371] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:85; }","duration":"1.796010171s","start":"2021-08-14T09:53:50.517Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[286981371] 'agreement among raft nodes before linearized reading'  (duration: 1.792999205s)"],"step_count":1}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[1329788645] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-20210814095308-6746; range_end:; response_count:1; response_revision:85; }","duration":"1.60630003s","start":"2021-08-14T09:53:50.706Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[1329788645] 'agreement among raft nodes before linearized reading'  (duration: 1.603220066s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.517Z","time spent":"1.796076056s","remote":"127.0.0.1:33498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.706Z","time spent":"1.60635546s","remote":"127.0.0.1:33352","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":6245,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-20210814095308-6746\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.512Z","time spent":"1.80110827s","remote":"127.0.0.1:33346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":0,"response size":27,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"857.174879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[1164043677] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:0; response_revision:85; }","duration":"857.705329ms","start":"2021-08-14T09:53:51.455Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[1164043677] 'agreement among raft nodes before linearized reading'  (duration: 854.229493ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:51.455Z","time spent":"857.752348ms","remote":"127.0.0.1:33442","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":27,"request content":"key:\"/registry/clusterrolebindings/system:discovery\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.725474595s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[1784313380] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:85; }","duration":"1.726138835s","start":"2021-08-14T09:53:50.587Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[1784313380] 'agreement among raft nodes before linearized reading'  (duration: 1.722545516s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.587Z","time spent":"1.726207349s","remote":"127.0.0.1:33498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"868.126105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-14T09:54:02.888Z","caller":"traceutil/trace.go:171","msg":"trace[105308563] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:355; }","duration":"868.242058ms","start":"2021-08-14T09:54:02.020Z","end":"2021-08-14T09:54:02.888Z","steps":["trace[105308563] 'range keys from in-memory index tree'  (duration: 868.046874ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:54:02.019Z","time spent":"868.306693ms","remote":"127.0.0.1:33354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":28,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"918.257504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210814095308-6746\" ","response":"range_response_count:1 size:4247"}
	{"level":"info","ts":"2021-08-14T09:54:02.888Z","caller":"traceutil/trace.go:171","msg":"trace[759917799] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-20210814095308-6746; range_end:; response_count:1; response_revision:355; }","duration":"918.304691ms","start":"2021-08-14T09:54:01.970Z","end":"2021-08-14T09:54:02.888Z","steps":["trace[759917799] 'range keys from in-memory index tree'  (duration: 918.123987ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:54:01.970Z","time spent":"918.356228ms","remote":"127.0.0.1:33352","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":4270,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210814095308-6746\" "}
	{"level":"warn","ts":"2021-08-14T09:54:07.018Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.689781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4667"}
	{"level":"info","ts":"2021-08-14T09:54:07.018Z","caller":"traceutil/trace.go:171","msg":"trace[2011038256] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:437; }","duration":"100.812008ms","start":"2021-08-14T09:54:06.917Z","end":"2021-08-14T09:54:07.018Z","steps":["trace[2011038256] 'agreement among raft nodes before linearized reading'  (duration: 100.658129ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:54:07.018Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.282663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-78fcd69978\" ","response":"range_response_count:1 size:3633"}
	{"level":"info","ts":"2021-08-14T09:54:07.018Z","caller":"traceutil/trace.go:171","msg":"trace[1959183368] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-78fcd69978; range_end:; response_count:1; response_revision:437; }","duration":"100.340337ms","start":"2021-08-14T09:54:06.918Z","end":"2021-08-14T09:54:07.018Z","steps":["trace[1959183368] 'agreement among raft nodes before linearized reading'  (duration: 100.259805ms)"],"step_count":1}
	
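
The repeated "apply request took too long" warnings above are range reads blowing past etcd's 100ms expected-duration while the apiserver was starting up. A client-side probe in the same spirit can be sketched with clientv3; the endpoint and key match this log, but the TLS setup this etcd requires (client certs under /var/lib/minikube/certs/etcd) is omitted for brevity:

	package main

	import (
		"context"
		"fmt"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://192.168.58.2:2379"}, // real use must also set Config.TLS
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		start := time.Now()
		if _, err := cli.Get(context.TODO(), "/registry/health"); err != nil {
			panic(err)
		}
		if took := time.Since(start); took > 100*time.Millisecond {
			fmt.Printf("slow read: took %s\n", took) // same threshold etcd warns at
		}
	}
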
	* 
	* ==> etcd [e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da] <==
	* {"level":"info","ts":"2021-08-14T09:54:57.966Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-14T09:54:57.967Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.0","cluster-id":"3a56e4ca95e2355c","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:54:57.967Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-14T09:54:57.968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2021-08-14T09:54:57.968Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2021-08-14T09:54:57.968Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.301Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20210814095308-6746 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-14T09:54:58.301Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:54:58.301Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  09:55:17 up  1:37,  0 users,  load average: 2.03, 1.62, 1.69
	Linux newest-cni-20210814095308-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea] <==
	* I0814 09:55:01.466564       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0814 09:55:01.466619       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0814 09:55:01.514545       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 09:55:01.515262       1 shared_informer.go:247] Caches are synced for node_authorizer 
	E0814 09:55:01.514724       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0814 09:55:01.600893       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0814 09:55:01.601200       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0814 09:55:01.601826       1 cache.go:39] Caches are synced for autoregister controller
	I0814 09:55:01.602087       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0814 09:55:01.600906       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0814 09:55:01.602622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 09:55:01.630263       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0814 09:55:02.465126       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0814 09:55:02.465286       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0814 09:55:02.469361       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0814 09:55:03.178490       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 09:55:03.271689       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 09:55:03.285155       1 controller.go:611] quota admission added evaluator for: deployments.apps
	W0814 09:55:03.317842       1 handler_proxy.go:104] no RequestInfo found in the context
	E0814 09:55:03.317922       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0814 09:55:03.317933       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 09:55:03.330565       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 09:55:03.335501       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 09:55:04.156337       1 controller.go:611] quota admission added evaluator for: namespaces
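
The 503 for v1beta1.metrics.k8s.io above is expected while the metrics-server pod is still Unschedulable: the aggregated API has no healthy backend, so the apiserver cannot fetch its OpenAPI spec and requeues. A hedged client-go discovery probe for that group (the kubeconfig path is illustrative):

	package main

	import (
		"fmt"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Returns a ServiceUnavailable-style error while metrics-server has no ready endpoints.
		rl, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics API not ready:", err)
			return
		}
		for _, r := range rl.APIResources {
			fmt.Println(r.Name)
		}
	}
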
	
	* 
	* ==> kube-apiserver [97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce] <==
	* I0814 09:53:52.314358       1 trace.go:205] Trace[1226201223]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-newest-cni-20210814095308-6746,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:56041f4f-ac38-49f6-a1e2-d7ea76edae33,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:53:50.706) (total time: 1608ms):
	Trace[1226201223]: ---"About to write a response" 1607ms (09:53:52.313)
	Trace[1226201223]: [1.608028787s] [1.608028787s] END
	I0814 09:53:52.314484       1 trace.go:205] Trace[1459702612]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:55520a25-3343-4135-9bed-f5cf7615be97,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:53:51.455) (total time: 859ms):
	Trace[1459702612]: [859.120585ms] [859.120585ms] END
	I0814 09:53:52.314488       1 trace.go:205] Trace[1387414596]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:433b4818-a07d-4eb2-857f-986f1288d8e6,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:53:50.511) (total time: 1802ms):
	Trace[1387414596]: [1.802769395s] [1.802769395s] END
	I0814 09:53:52.314998       1 trace.go:205] Trace[845452184]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:104569a6-661a-4e2f-8664-faa5870bff22,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:53:50.460) (total time: 1854ms):
	Trace[845452184]: [1.85434696s] [1.85434696s] END
	I0814 09:53:52.810044       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 09:53:52.839147       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0814 09:53:52.932537       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0814 09:53:52.933399       1 controller.go:611] quota admission added evaluator for: endpoints
	I0814 09:53:52.936454       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 09:53:53.125739       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 09:53:54.305778       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0814 09:53:54.333804       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 09:53:59.402762       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 09:54:02.889388       1 trace.go:205] Trace[940905341]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-newest-cni-20210814095308-6746,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:d230ffa0-e61c-4074-9439-daa373a87415,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:54:01.969) (total time: 919ms):
	Trace[940905341]: ---"About to write a response" 919ms (09:54:02.889)
	Trace[940905341]: [919.730292ms] [919.730292ms] END
	I0814 09:54:02.890264       1 trace.go:205] Trace[1167629556]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:2e7b84b4-e9c9-4d72-8e2a-bede6d4f1a99,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:54:02.019) (total time: 870ms):
	Trace[1167629556]: [870.652983ms] [870.652983ms] END
	I0814 09:54:06.735555       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0814 09:54:06.811363       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581] <==
	* I0814 09:54:06.801087       1 disruption.go:371] Sending events to api server.
	I0814 09:54:06.801067       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0814 09:54:06.804418       1 range_allocator.go:373] Set node newest-cni-20210814095308-6746 PodCIDR to [192.168.0.0/24]
	I0814 09:54:06.815212       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-xsdxs"
	I0814 09:54:06.821565       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5jhgl"
	I0814 09:54:06.821868       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-gz25q"
	I0814 09:54:06.821892       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qmdzb"
	I0814 09:54:06.829111       1 event.go:291] "Event occurred" object="kube-dns" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I0814 09:54:06.901007       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0814 09:54:06.959544       1 shared_informer.go:247] Caches are synced for cronjob 
	I0814 09:54:06.964966       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:54:06.976237       1 shared_informer.go:247] Caches are synced for job 
	I0814 09:54:06.976238       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0814 09:54:06.976261       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0814 09:54:06.976296       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0814 09:54:06.985497       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:54:07.123527       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0814 09:54:07.130744       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-xsdxs"
	I0814 09:54:07.365769       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:54:07.365791       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:54:07.411379       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:54:08.869610       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0814 09:54:08.877288       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:54:08.903043       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:54:08.910364       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-lq9bb"
	
	* 
	* ==> kube-controller-manager [d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83] <==
	* I0814 09:54:59.403511       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0814 09:55:04.711533       1 request.go:665] Waited for 1.00834481s due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/apis/discovery.k8s.io/v1beta1?timeout=32s
	E0814 09:55:05.112296       1 controllermanager.go:467] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0814 09:55:05.113192       1 shared_informer.go:240] Waiting for caches to sync for tokens
	E0814 09:55:05.131483       1 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0814 09:55:05.131595       1 controllermanager.go:577] Started "namespace"
	I0814 09:55:05.131678       1 namespace_controller.go:200] Starting namespace controller
	I0814 09:55:05.131700       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0814 09:55:05.138145       1 controllermanager.go:577] Started "horizontalpodautoscaling"
	I0814 09:55:05.138247       1 horizontal.go:169] Starting HPA controller
	I0814 09:55:05.138267       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0814 09:55:05.140391       1 controllermanager.go:577] Started "cronjob"
	I0814 09:55:05.140571       1 cronjob_controllerv2.go:125] "Starting cronjob controller v2"
	I0814 09:55:05.140588       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0814 09:55:05.143454       1 controllermanager.go:577] Started "clusterrole-aggregation"
	I0814 09:55:05.143606       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
	I0814 09:55:05.143618       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0814 09:55:05.145522       1 controllermanager.go:577] Started "attachdetach"
	I0814 09:55:05.145636       1 attach_detach_controller.go:328] Starting attach detach controller
	I0814 09:55:05.145644       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0814 09:55:05.147453       1 controllermanager.go:577] Started "job"
	I0814 09:55:05.147473       1 job_controller.go:172] Starting job controller
	I0814 09:55:05.147482       1 shared_informer.go:240] Waiting for caches to sync for job
	I0814 09:55:05.149431       1 node_ipam_controller.go:91] Sending events to api server.
	I0814 09:55:05.213720       1 shared_informer.go:247] Caches are synced for tokens 
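
The "Waited for 1.00834481s due to client-side throttling" line above is client-go's token-bucket rate limiter backing off during the controller-manager's burst of discovery calls, not server-side priority and fairness. For a custom client the knobs sit on rest.Config, where unset values default to QPS 5 and Burst 10; a minimal sketch with illustrative limits:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // steady-state requests per second before throttling
		cfg.Burst = 100 // short bursts allowed above QPS

		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = client // use the clientset as usual; large call bursts now throttle less
	}
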
	
	* 
	* ==> kube-proxy [5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e] <==
	* I0814 09:55:02.643771       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:55:02.643819       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:55:02.643838       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0814 09:55:02.720975       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:55:02.721008       1 server_others.go:212] Using iptables Proxier.
	I0814 09:55:02.721021       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:55:02.721035       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:55:02.721390       1 server.go:649] Version: v1.22.0-rc.0
	I0814 09:55:02.722068       1 config.go:224] Starting endpoint slice config controller
	I0814 09:55:02.722159       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0814 09:55:02.722198       1 config.go:315] Starting service config controller
	I0814 09:55:02.722202       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0814 09:55:02.725229       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210814095308-6746.169b23a9defa06c9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03e029dab07e501, ext:151845569, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210814095308-6746", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210814095308-6746", UID:"newest-cni-20210814095308-6746", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210814095308-6746.169b23a9defa06c9" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0814 09:55:02.822378       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:55:02.822383       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf] <==
	* I0814 09:54:08.129166       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:54:08.129235       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:54:08.129260       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0814 09:54:08.210544       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:54:08.210599       1 server_others.go:212] Using iptables Proxier.
	I0814 09:54:08.210617       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:54:08.210634       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:54:08.211381       1 server.go:649] Version: v1.22.0-rc.0
	I0814 09:54:08.212272       1 config.go:315] Starting service config controller
	I0814 09:54:08.212480       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:54:08.212592       1 config.go:224] Starting endpoint slice config controller
	I0814 09:54:08.212605       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0814 09:54:08.217120       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210814095308-6746.169b239d2df56c32", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03e02900ca9e336, ext:206518773, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210814095308-6746", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210814095308-6746", UID:"newest-cni-20210814095308-6746", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210814095308-6746.169b239d2df56c32" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0814 09:54:08.312890       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:54:08.312965       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4] <==
	* E0814 09:53:46.563491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:46.632122       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 09:53:46.673541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:53:46.713921       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:53:47.940476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:53:48.118504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:53:48.130646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:53:48.172018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:48.292783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:53:48.644211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:48.976570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:53:49.077462       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:53:49.111558       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:49.117545       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:53:49.149747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:49.275716       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:53:49.293783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:53:49.521838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 09:53:49.554067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:53:51.948233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:51.970406       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:53:51.995506       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:53:52.135637       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:53:52.503115       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0814 09:54:00.918315       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc] <==
	* W0814 09:54:58.112568       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0814 09:54:58.614473       1 serving.go:347] Generated self-signed cert in-memory
	W0814 09:55:01.484562       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 09:55:01.484598       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 09:55:01.484610       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 09:55:01.484618       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 09:55:01.527873       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0814 09:55:01.527994       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 09:55:01.528015       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:55:01.528028       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0814 09:55:01.606685       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0814 09:55:01.606952       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0814 09:55:01.628136       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:54:31 UTC, end at Sat 2021-08-14 09:55:17 UTC. --
	Aug 14 09:55:00 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:00.929239     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.030140     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.130885     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.231433     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.331980     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.432837     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.523020     712 kubelet_node_status.go:109] "Node was previously registered" node="newest-cni-20210814095308-6746"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.523126     712 kubelet_node_status.go:74] "Successfully registered node" node="newest-cni-20210814095308-6746"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.524218     712 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.524924     712 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.902050     712 apiserver.go:52] "Watching apiserver"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.906955     712 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.907080     712 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011536     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/515a8aac-189c-47a3-9a37-3210ef5cfd44-xtables-lock\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011580     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c1f42ab-2957-447b-9911-2959da7ffe6d-xtables-lock\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011601     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c1f42ab-2957-447b-9911-2959da7ffe6d-lib-modules\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011657     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/515a8aac-189c-47a3-9a37-3210ef5cfd44-lib-modules\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011685     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/515a8aac-189c-47a3-9a37-3210ef5cfd44-cni-cfg\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011773     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfq5j\" (UniqueName: \"kubernetes.io/projected/515a8aac-189c-47a3-9a37-3210ef5cfd44-kube-api-access-vfq5j\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011822     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c1f42ab-2957-447b-9911-2959da7ffe6d-kube-proxy\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011853     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfsp6\" (UniqueName: \"kubernetes.io/projected/1c1f42ab-2957-447b-9911-2959da7ffe6d-kube-api-access-bfsp6\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011874     712 reconciler.go:157] "Reconciler: start to sync state"
	Aug 14 09:55:05 newest-cni-20210814095308-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:55:05 newest-cni-20210814095308-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:55:05 newest-cni-20210814095308-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:55:17.520664  267295 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210814095308-6746
helpers_test.go:236: (dbg) docker inspect newest-cni-20210814095308-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991",
	        "Created": "2021-08-14T09:53:10.1692035Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263769,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:54:31.652141454Z",
	            "FinishedAt": "2021-08-14T09:54:29.33137841Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/hostname",
	        "HostsPath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/hosts",
	        "LogPath": "/var/lib/docker/containers/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991/00d6fdeb074bc0b8ec5d4b253f92262cd9437839e0078fbbb82091ac9355a991-json.log",
	        "Name": "/newest-cni-20210814095308-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20210814095308-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210814095308-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6f26a3f6f0854d877c8a3a44da39c93d68585ba02b9ceed30dc7d8403a087a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210814095308-6746",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210814095308-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210814095308-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210814095308-6746",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210814095308-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "010e8632f9b86b5c6acf54143964fcf7624f3525f114dcb1789424ea05fb3eb4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/010e8632f9b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210814095308-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "00d6fdeb074b"
	                    ],
	                    "NetworkID": "405b862b9dfe05154028ef97fb7bca891d96549290a433a955934c71cb864401",
	                    "EndpointID": "9f98f403d076b32d4a427c735f816e86decf4fa3ca7ad1bcdd7d5c1719345134",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210814095308-6746 -n newest-cni-20210814095308-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210814095308-6746 -n newest-cni-20210814095308-6746: exit status 2 (302.396746ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210814095308-6746 logs -n 25
E0814 09:55:24.455892    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210814095308-6746 logs -n 25: exit status 110 (10.878157787s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:25 UTC | Sat, 14 Aug 2021 09:44:41 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:49 UTC | Sat, 14 Aug 2021 09:44:50 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                            | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:50 UTC | Sat, 14 Aug 2021 09:44:51 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:44:51 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:45:12 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210814094108-6746                          | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:43:10 UTC | Sat, 14 Aug 2021 09:48:31 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:48:45 UTC | Sat, 14 Aug 2021 09:48:45 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:34 UTC | Sat, 14 Aug 2021 09:50:38 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:38 UTC | Sat, 14 Aug 2021 09:50:39 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20210814095039-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:39 UTC | Sat, 14 Aug 2021 09:50:40 UTC |
	|         | disable-driver-mounts-20210814095039-6746                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:50:56 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                            | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:06 UTC | Sat, 14 Aug 2021 09:51:07 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:08 UTC | Sat, 14 Aug 2021 09:51:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:40 UTC | Sat, 14 Aug 2021 09:51:36 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:51:45 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:52:06 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:04 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:54:08 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:08 UTC | Sat, 14 Aug 2021 09:54:09 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:09 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:04 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:54:29
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:54:29.972378  263219 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:54:29.972462  263219 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:29.972475  263219 out.go:311] Setting ErrFile to fd 2...
	I0814 09:54:29.972479  263219 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:29.972573  263219 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:54:29.972846  263219 out.go:305] Setting JSON to false
	I0814 09:54:30.009462  263219 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5832,"bootTime":1628929038,"procs":267,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:54:30.009542  263219 start.go:121] virtualization: kvm guest
	I0814 09:54:30.011519  263219 out.go:177] * [newest-cni-20210814095308-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:54:30.013033  263219 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:54:30.011654  263219 notify.go:169] Checking for updates...
	I0814 09:54:30.014430  263219 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:54:30.015850  263219 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:54:30.017244  263219 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:54:30.017641  263219 config.go:177] Loaded profile config "newest-cni-20210814095308-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:54:30.018126  263219 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:54:30.066375  263219 docker.go:132] docker version: linux-19.03.15
	I0814 09:54:30.066454  263219 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:54:30.145259  263219 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:54:30.101615392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:54:30.145366  263219 docker.go:244] overlay module found
	I0814 09:54:30.147397  263219 out.go:177] * Using the docker driver based on existing profile
	I0814 09:54:30.147421  263219 start.go:278] selected driver: docker
	I0814 09:54:30.147429  263219 start.go:751] validating driver "docker" against &{Name:newest-cni-20210814095308-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] Verify
Components:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:54:30.147526  263219 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:54:30.147562  263219 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:54:30.147578  263219 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:54:30.148943  263219 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:54:30.149784  263219 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:54:30.228921  263219 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:54:30.185568693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0814 09:54:30.229046  263219 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:54:30.229076  263219 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:54:30.231218  263219 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:54:30.231327  263219 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 09:54:30.231354  263219 cni.go:93] Creating CNI manager for ""
	I0814 09:54:30.231362  263219 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:54:30.231375  263219 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:54:30.231391  263219 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
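	The auto-set value /etc/cni/net.mk is minikube's own CNI conf dir, kept separate from the default /etc/cni/net.d, evidently so that the kindnet config minikube writes is the only one the kubelet sees; the same path reappears below in the generated kubelet flags and in the containerd config.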
	I0814 09:54:30.231402  263219 start_flags.go:277] config:
	{Name:newest-cni-20210814095308-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kub
elet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:54:30.233150  263219 out.go:177] * Starting control plane node newest-cni-20210814095308-6746 in cluster newest-cni-20210814095308-6746
	I0814 09:54:30.233187  263219 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:54:30.234581  263219 out.go:177] * Pulling base image ...
	I0814 09:54:30.234612  263219 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:54:30.234649  263219 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:54:30.234666  263219 cache.go:56] Caching tarball of preloaded images
	I0814 09:54:30.234721  263219 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:54:30.234868  263219 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:54:30.234885  263219 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0814 09:54:30.235033  263219 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/config.json ...
	I0814 09:54:30.322925  263219 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:54:30.322948  263219 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:54:30.322968  263219 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:54:30.322999  263219 start.go:313] acquiring machines lock for newest-cni-20210814095308-6746: {Name:mka71e6cef7914d8cc25826ac188b3d65cc88bef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:54:30.323105  263219 start.go:317] acquired machines lock for "newest-cni-20210814095308-6746" in 61.331µs
	I0814 09:54:30.323124  263219 start.go:93] Skipping create...Using existing machine configuration
	I0814 09:54:30.323131  263219 fix.go:55] fixHost starting: 
	I0814 09:54:30.323368  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:54:30.365622  263219 fix.go:108] recreateIfNeeded on newest-cni-20210814095308-6746: state=Stopped err=<nil>
	W0814 09:54:30.365659  263219 fix.go:134] unexpected machine state, will restart: <nil>
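	The decision above comes straight from "docker container inspect --format={{.State.Status}}": a stopped container is restarted in place rather than recreated. A minimal standalone Go sketch of that check (a re-implementation for illustration, not minikube's cli_runner API):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerState returns Docker's state string ("running", "exited", ...)
	// for a named container, mirroring the inspect call in the log above.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", name).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		state, err := containerState("newest-cni-20210814095308-6746")
		if err != nil {
			fmt.Println("container not found:", err)
			return
		}
		// state=Stopped in the log corresponds to "exited" here and
		// triggers the "Restarting existing docker container" path.
		fmt.Println("state:", state, "restart needed:", state != "running")
	}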
	I0814 09:54:28.757277  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:30.759710  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:30.367988  263219 out.go:177] * Restarting existing docker container for "newest-cni-20210814095308-6746" ...
	I0814 09:54:30.368066  263219 cli_runner.go:115] Run: docker start newest-cni-20210814095308-6746
	I0814 09:54:31.659073  263219 cli_runner.go:168] Completed: docker start newest-cni-20210814095308-6746: (1.290967649s)
	I0814 09:54:31.659166  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:54:31.696414  263219 kic.go:420] container "newest-cni-20210814095308-6746" state is running.
	I0814 09:54:31.696848  263219 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210814095308-6746
	I0814 09:54:31.738430  263219 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/config.json ...
	I0814 09:54:31.738644  263219 machine.go:88] provisioning docker machine ...
	I0814 09:54:31.738681  263219 ubuntu.go:169] provisioning hostname "newest-cni-20210814095308-6746"
	I0814 09:54:31.738736  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:31.779017  263219 main.go:130] libmachine: Using SSH client type: native
	I0814 09:54:31.779246  263219 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0814 09:54:31.779273  263219 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210814095308-6746 && echo "newest-cni-20210814095308-6746" | sudo tee /etc/hostname
	I0814 09:54:31.779718  263219 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42876->127.0.0.1:32968: read: connection reset by peer
	I0814 09:54:34.911471  263219 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210814095308-6746
	
	I0814 09:54:34.911538  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:33.258147  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:35.757363  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:34.949564  263219 main.go:130] libmachine: Using SSH client type: native
	I0814 09:54:34.949802  263219 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0814 09:54:34.949842  263219 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210814095308-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210814095308-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210814095308-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:54:35.071890  263219 main.go:130] libmachine: SSH cmd err, output: <nil>: 
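	The same kind of idempotent /etc/hosts surgery recurs at 09:54:47 for host.minikube.internal and control-plane.minikube.internal, where the old entry is filtered out with grep -v and the file rewritten through a temp copy. A hedged Go sketch of how such a one-liner can be assembled (hypothetical helper; the generated commands appear verbatim later in the log):
	
	package main
	
	import "fmt"
	
	// hostsEntryCmd builds the shell one-liner: drop any existing /etc/hosts
	// line for name, append "ip<TAB>name", and copy the result back in place.
	func hostsEntryCmd(ip, name string) string {
		entry := ip + "\t" + name // literal tab, matching grep -v $'\t<name>$'
		return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
			name, entry)
	}
	
	func main() {
		fmt.Println(hostsEntryCmd("192.168.58.1", "host.minikube.internal"))
	}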
	I0814 09:54:35.071921  263219 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.mini
kube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:54:35.071945  263219 ubuntu.go:177] setting up certificates
	I0814 09:54:35.071954  263219 provision.go:83] configureAuth start
	I0814 09:54:35.072001  263219 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210814095308-6746
	I0814 09:54:35.110468  263219 provision.go:138] copyHostCerts
	I0814 09:54:35.110530  263219 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:54:35.110547  263219 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:54:35.110596  263219 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:54:35.110667  263219 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:54:35.110677  263219 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:54:35.110699  263219 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:54:35.110748  263219 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:54:35.110757  263219 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:54:35.110771  263219 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:54:35.110805  263219 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210814095308-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210814095308-6746]
	I0814 09:54:35.284923  263219 provision.go:172] copyRemoteCerts
	I0814 09:54:35.284990  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:54:35.285042  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.324256  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.411115  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:54:35.426265  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0814 09:54:35.441057  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:54:35.455785  263219 provision.go:86] duration metric: configureAuth took 383.821198ms
	I0814 09:54:35.455810  263219 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:54:35.455969  263219 config.go:177] Loaded profile config "newest-cni-20210814095308-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:54:35.455980  263219 machine.go:91] provisioned docker machine in 3.717319089s
	I0814 09:54:35.455987  263219 start.go:267] post-start starting for "newest-cni-20210814095308-6746" (driver="docker")
	I0814 09:54:35.455993  263219 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:54:35.456031  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:54:35.456067  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.494054  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.583272  263219 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:54:35.585764  263219 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:54:35.585796  263219 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:54:35.585805  263219 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:54:35.585810  263219 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:54:35.585818  263219 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:54:35.585859  263219 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:54:35.585928  263219 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:54:35.586011  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:54:35.591981  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:54:35.607225  263219 start.go:270] post-start completed in 151.226337ms
	I0814 09:54:35.607277  263219 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:54:35.607310  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.645581  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.732601  263219 fix.go:57] fixHost completed within 5.409464523s
	I0814 09:54:35.732629  263219 start.go:80] releasing machines lock for "newest-cni-20210814095308-6746", held for 5.409513777s
	I0814 09:54:35.732697  263219 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210814095308-6746
	I0814 09:54:35.772352  263219 ssh_runner.go:149] Run: systemctl --version
	I0814 09:54:35.772403  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.772429  263219 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:54:35.772485  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:54:35.811915  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.812173  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:54:35.896233  263219 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:54:35.922223  263219 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:54:35.930767  263219 docker.go:153] disabling docker service ...
	I0814 09:54:35.930808  263219 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:54:35.939528  263219 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:54:35.947344  263219 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:54:36.003011  263219 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:54:36.054666  263219 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:54:36.062627  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:54:36.073757  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
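	For readability, the base64 payload above decodes to the /etc/containerd/config.toml that minikube installs; its opening lines are:
	
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	  uid = 0
	  gid = 0
	  max_recv_message_size = 16777216
	  max_send_message_size = 16777216
	
	Further down the same file pins sandbox_image = "k8s.gcr.io/pause:3.4.1", keeps SystemdCgroup = false (consistent with the cgroupfs driver reported by docker info above), and points the CRI plugin's CNI conf_dir at /etc/cni/net.mk.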
	I0814 09:54:36.085148  263219 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:54:36.090731  263219 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:54:36.090767  263219 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:54:36.097400  263219 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
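	The three commands above form a fallback chain: the sysctl probe exits 255 because br_netfilter is not loaded in this 4.9 kernel's container view, so the module is loaded explicitly and IPv4 forwarding is enabled for pod traffic. A compact standalone Go sketch of the same sequence (illustration only, assumes sudo is available):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Probe first; this fails until br_netfilter is loaded.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// "might be okay": load the module and carry on.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe br_netfilter:", err)
			}
		}
		// Pod-to-pod routing needs IPv4 forwarding regardless.
		if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
			fmt.Println("enable ip_forward:", err)
		}
	}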
	I0814 09:54:36.102890  263219 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:54:36.156054  263219 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:54:36.224541  263219 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:54:36.224605  263219 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:54:36.227881  263219 start.go:413] Will wait 60s for crictl version
	I0814 09:54:36.227926  263219 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:54:36.249600  263219 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:54:36Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:54:38.258254  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:40.757314  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:43.258416  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:45.757513  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:47.297983  263219 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:54:47.320600  263219 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:54:47.320649  263219 ssh_runner.go:149] Run: containerd --version
	I0814 09:54:47.343783  263219 ssh_runner.go:149] Run: containerd --version
	I0814 09:54:47.367813  263219 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0814 09:54:47.367887  263219 cli_runner.go:115] Run: docker network inspect newest-cni-20210814095308-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:54:47.405957  263219 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:54:47.409142  263219 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:54:47.420131  263219 out.go:177]   - kubelet.network-plugin=cni
	I0814 09:54:47.423383  263219 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0814 09:54:47.423457  263219 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:54:47.423512  263219 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:54:47.446811  263219 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:54:47.446829  263219 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:54:47.446862  263219 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:54:47.468474  263219 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:54:47.468493  263219 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:54:47.468529  263219 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:54:47.488870  263219 cni.go:93] Creating CNI manager for ""
	I0814 09:54:47.488893  263219 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:54:47.488904  263219 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0814 09:54:47.488917  263219 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210814095308-6746 NodeName:newest-cni-20210814095308-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-
elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:54:47.489049  263219 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210814095308-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
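	Note that the pod CIDR the test passed in, 192.168.111.111/16, survives verbatim into podSubnet and kube-proxy's clusterCIDR: it is a host address rather than a network address (the canonical form would be 192.168.0.0/16), which kubeadm evidently tolerates, since the earlier start completed.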
	
	I0814 09:54:47.489142  263219 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210814095308-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
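	One systemd detail in the drop-in above: the empty ExecStart= line is deliberate. For a non-oneshot service, a drop-in may only supply a new ExecStart after first clearing the inherited one with an empty assignment; otherwise systemd would reject the unit for defining two start commands.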
	I0814 09:54:47.489186  263219 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0814 09:54:47.495406  263219 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:54:47.495466  263219 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:54:47.501317  263219 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (617 bytes)
	I0814 09:54:47.512521  263219 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0814 09:54:47.523490  263219 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0814 09:54:47.534519  263219 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:54:47.537142  263219 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:54:47.545276  263219 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746 for IP: 192.168.58.2
	I0814 09:54:47.545312  263219 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:54:47.545325  263219 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:54:47.545371  263219 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/client.key
	I0814 09:54:47.545397  263219 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/apiserver.key.cee25041
	I0814 09:54:47.545412  263219 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/proxy-client.key
	I0814 09:54:47.545509  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:54:47.545548  263219 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:54:47.545558  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:54:47.545583  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:54:47.545609  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:54:47.545633  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:54:47.545687  263219 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:54:47.546623  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:54:47.562014  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:54:47.577214  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:54:47.592255  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/newest-cni-20210814095308-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:54:47.607206  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:54:47.621996  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:54:47.637131  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:54:47.652557  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:54:47.667610  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:54:47.682608  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:54:47.697613  263219 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:54:47.712546  263219 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:54:47.723560  263219 ssh_runner.go:149] Run: openssl version
	I0814 09:54:47.728074  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:54:47.734595  263219 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:54:47.737470  263219 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:54:47.737504  263219 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:54:47.741923  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:54:47.748540  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:54:47.755676  263219 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:54:47.758734  263219 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:54:47.758778  263219 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:54:47.763208  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:54:47.769070  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:54:47.775553  263219 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:54:47.778275  263219 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:54:47.778311  263219 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:54:47.782612  263219 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
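	The link names b5213941.0, 51391683.0 and 3ec20f2e.0 are the subject hashes that "openssl x509 -hash -noout" prints for each certificate; OpenSSL's hashed-directory lookup (the c_rehash layout) finds a CA by exactly this <hash>.0 filename. A small standalone Go sketch of the hash-and-link step (assumes openssl on PATH and write access to /etc/ssl/certs):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// linkCert symlinks certPath into /etc/ssl/certs under its OpenSSL
	// subject-hash name, e.g. /etc/ssl/certs/b5213941.0.
	func linkCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // mimic ln -fs: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}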
	I0814 09:54:47.788618  263219 kubeadm.go:390] StartCluster: {Name:newest-cni-20210814095308-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210814095308-6746 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apise
rver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:54:47.788714  263219 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:54:47.788753  263219 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:54:47.810759  263219 cri.go:76] found id: "3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08"
	I0814 09:54:47.810776  263219 cri.go:76] found id: "5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf"
	I0814 09:54:47.810780  263219 cri.go:76] found id: "8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0"
	I0814 09:54:47.810784  263219 cri.go:76] found id: "7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581"
	I0814 09:54:47.810787  263219 cri.go:76] found id: "2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4"
	I0814 09:54:47.810791  263219 cri.go:76] found id: "97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce"
	I0814 09:54:47.810795  263219 cri.go:76] found id: ""
	I0814 09:54:47.810820  263219 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:54:47.823695  263219 cri.go:103] JSON = null
	W0814 09:54:47.823735  263219 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 6
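
The "unpause failed" warning above is the result of two independent probes disagreeing: crictl lists all kube-system containers (the six IDs above), while runc, asked for state in the same runtime root, returns null because nothing is paused. Both commands come straight from the log and can be run on the node as-is:

	# list kube-system container IDs known to the CRI (running and exited)
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# list runc state in the containerd runtime root; "null" means no paused containers
	sudo runc --root /run/containerd/runc/k8s.io list -f json
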
	I0814 09:54:47.823781  263219 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:54:47.829711  263219 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0814 09:54:47.829729  263219 kubeadm.go:600] restartCluster start
	I0814 09:54:47.829756  263219 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0814 09:54:47.835604  263219 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:47.836581  263219 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210814095308-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:54:47.836858  263219 kubeconfig.go:128] "newest-cni-20210814095308-6746" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig - will repair!
	I0814 09:54:47.837258  263219 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:54:47.839973  263219 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
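
Before attempting a restart, minikube first repairs the missing kubeconfig context and then diffs the kubeadm config already on disk against the freshly rendered one; a non-empty diff (or, as in this run, an unreachable apiserver) is what pushes it down the reconfigure path. The drift check is reproducible verbatim:

	# compare the active kubeadm config with the newly generated candidate
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
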
	I0814 09:54:47.846358  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:47.846402  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:47.857824  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.058152  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.058241  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.071639  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.258909  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.258985  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.272459  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.458734  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.458826  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.471961  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.658250  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.658339  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.671874  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:48.858176  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:48.858240  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:48.871258  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.058537  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.058613  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.072434  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.258696  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.258762  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.272007  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.458364  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.458436  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.470778  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.657968  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.658042  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.670775  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:49.858030  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:49.858102  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:49.870762  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:47.757543  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:50.258429  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:50.058443  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.058505  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.071297  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.258488  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.258548  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.271635  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.458924  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.458992  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.471709  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.657922  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.657992  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.671178  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.858417  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.858476  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.871478  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.871498  263219 api_server.go:164] Checking apiserver status ...
	I0814 09:54:50.871533  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0814 09:54:50.883639  263219 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
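
The block above is a ~200ms polling loop: each iteration runs pgrep against the full command line of every process, and exit status 1 just means no kube-apiserver has appeared yet. After roughly three seconds of consecutive misses the loop gives up and, as the next line shows, declares the control plane in need of reconfiguration. The probe itself:

	# -x: match the whole command line, -n: newest match, -f: match full args
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# exit 0 = found (prints the PID), exit 1 = no matching process yet
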
	I0814 09:54:50.883663  263219 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0814 09:54:50.883669  263219 kubeadm.go:1032] stopping kube-system containers ...
	I0814 09:54:50.883681  263219 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:54:50.883723  263219 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:54:50.922065  263219 cri.go:76] found id: "3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08"
	I0814 09:54:50.922090  263219 cri.go:76] found id: "5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf"
	I0814 09:54:50.922097  263219 cri.go:76] found id: "8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0"
	I0814 09:54:50.922103  263219 cri.go:76] found id: "7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581"
	I0814 09:54:50.922108  263219 cri.go:76] found id: "2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4"
	I0814 09:54:50.922112  263219 cri.go:76] found id: "97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce"
	I0814 09:54:50.922115  263219 cri.go:76] found id: ""
	I0814 09:54:50.922120  263219 cri.go:221] Stopping containers: [3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08 5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf 8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0 7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581 2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4 97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce]
	I0814 09:54:50.922168  263219 ssh_runner.go:149] Run: which crictl
	I0814 09:54:50.924855  263219 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08 5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf 8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0 7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581 2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4 97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce
	I0814 09:54:50.946244  263219 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0814 09:54:50.955189  263219 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:54:50.961482  263219 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 14 09:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 14 09:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Aug 14 09:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 14 09:53 /etc/kubernetes/scheduler.conf
	
	I0814 09:54:50.961534  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 09:54:50.967471  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 09:54:50.973384  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 09:54:50.979163  263219 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.979211  263219 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:54:50.984910  263219 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 09:54:50.990763  263219 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:54:50.990811  263219 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:54:50.996329  263219 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:54:51.002220  263219 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0814 09:54:51.002246  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.043434  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.653259  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.772704  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:54:51.822706  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
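
Rather than a full kubeadm init, the restart path replays individual init phases against the existing data directory, with the version-pinned binary directory prepended to PATH. The five phases logged above, condensed into a loop (same commands, same order):

	BIN=/var/lib/minikube/binaries/v1.22.0-rc.0
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # env PATH=... makes kubeadm resolve against the version-matched binaries first
	  sudo env PATH=$BIN:$PATH kubeadm init phase $phase --config $CFG
	done
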
	I0814 09:54:51.873247  263219 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:54:51.873311  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:52.410014  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:52.909983  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:53.410543  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:53.910003  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:54.409809  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:54.910029  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:52.757954  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:55.257696  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:55.409812  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:55.909793  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:56.410716  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:56.910783  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:57.409799  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:57.910366  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:58.409907  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:54:58.425524  263219 api_server.go:70] duration metric: took 6.552278205s to wait for apiserver process to appear ...
	I0814 09:54:58.425549  263219 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:54:58.425558  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:54:57.258496  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:54:59.258629  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:01.483437  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 09:55:01.483463  263219 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 09:55:01.984132  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:01.988491  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:55:01.988512  263219 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0814 09:55:02.484027  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:02.488941  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0814 09:55:02.488965  263219 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
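
The 500 bodies enumerate the apiserver's named health checks, and the run flips to 200 once the last failing post-start hook ([-]poststarthook/rbac/bootstrap-roles) completes. Each named check is also addressable on its own subpath, which helps when only one check is flapping; the path below is assumed from the check name printed in the output:

	# query a single named health check instead of the aggregate /healthz
	curl -k https://192.168.58.2:8443/healthz/poststarthook/rbac/bootstrap-roles
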
	I0814 09:55:02.984527  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:02.989064  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:55:02.994840  263219 api_server.go:139] control plane version: v1.22.0-rc.0
	I0814 09:55:02.994860  263219 api_server.go:129] duration metric: took 4.569305048s to wait for apiserver health ...
	I0814 09:55:02.994869  263219 cni.go:93] Creating CNI manager for ""
	I0814 09:55:02.994875  263219 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:55:02.996662  263219 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:55:02.996709  263219 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:55:03.000415  263219 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0814 09:55:03.000439  263219 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:55:03.013912  263219 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
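
With the apiserver healthy, the kindnet manifest is rendered in memory, copied over SSH to /var/tmp/minikube/cni.yaml, and applied with the version-matched kubectl inside the node using the in-VM kubeconfig, exactly as the log shows:

	# apply the generated CNI manifest with the cluster-local kubeconfig
	sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml
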
	I0814 09:55:03.183787  263219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:55:03.194795  263219 system_pods.go:59] 9 kube-system pods found
	I0814 09:55:03.194829  263219 system_pods.go:61] "coredns-78fcd69978-gz25q" [20b6b7da-c5c2-4631-8357-1a6ffaba0b3f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.194836  263219 system_pods.go:61] "etcd-newest-cni-20210814095308-6746" [1d958fd9-c1c4-4211-8ceb-d9f0c8d19ede] Running
	I0814 09:55:03.194842  263219 system_pods.go:61] "kindnet-qmdzb" [515a8aac-189c-47a3-9a37-3210ef5cfd44] Running
	I0814 09:55:03.194846  263219 system_pods.go:61] "kube-apiserver-newest-cni-20210814095308-6746" [0810d2fb-3ed7-43f1-8978-ab9fbf53f8f5] Running
	I0814 09:55:03.194850  263219 system_pods.go:61] "kube-controller-manager-newest-cni-20210814095308-6746" [ce6fab88-2ecd-4927-9a7d-a74284dadee2] Running
	I0814 09:55:03.194854  263219 system_pods.go:61] "kube-proxy-5jhgl" [1c1f42ab-2957-447b-9911-2959da7ffe6d] Running
	I0814 09:55:03.194859  263219 system_pods.go:61] "kube-scheduler-newest-cni-20210814095308-6746" [4bd13c2a-5f90-4e9a-897f-5d58ee4467e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:55:03.194865  263219 system_pods.go:61] "metrics-server-7c784ccb57-lq9bb" [4173e8b0-e67c-4d55-aa26-732c3e6ff081] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.194874  263219 system_pods.go:61] "storage-provisioner" [9a8a638e-e802-41aa-9fca-d4ed41608c70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.194879  263219 system_pods.go:74] duration metric: took 11.070687ms to wait for pod list to return data ...
	I0814 09:55:03.194889  263219 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:55:03.197888  263219 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:55:03.197921  263219 node_conditions.go:123] node cpu capacity is 8
	I0814 09:55:03.197936  263219 node_conditions.go:105] duration metric: took 3.042815ms to run NodePressure ...
	I0814 09:55:03.197951  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:55:03.342948  263219 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:55:03.356692  263219 ops.go:34] apiserver oom_adj: -16
	I0814 09:55:03.356709  263219 kubeadm.go:604] restartCluster took 15.526974986s
	I0814 09:55:03.356716  263219 kubeadm.go:392] StartCluster complete in 15.568106207s
	I0814 09:55:03.356731  263219 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:55:03.356837  263219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:55:03.357667  263219 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:55:03.361521  263219 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210814095308-6746" rescaled to 1
	I0814 09:55:03.361574  263219 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0814 09:55:03.361595  263219 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:55:03.363379  263219 out.go:177] * Verifying Kubernetes components...
	I0814 09:55:03.363434  263219 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:55:03.361664  263219 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0814 09:55:03.363506  263219 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.363528  263219 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.363538  263219 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:55:03.363572  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.363577  263219 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.361786  263219 config.go:177] Loaded profile config "newest-cni-20210814095308-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0814 09:55:03.363581  263219 addons.go:59] Setting dashboard=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.363592  263219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210814095308-6746"
	I0814 09:55:03.363604  263219 addons.go:135] Setting addon dashboard=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.363623  263219 addons.go:147] addon dashboard should already be in state true
	I0814 09:55:03.363652  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.363606  263219 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210814095308-6746"
	I0814 09:55:03.363732  263219 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.363746  263219 addons.go:147] addon metrics-server should already be in state true
	I0814 09:55:03.363781  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.363914  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.364095  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.364115  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.364234  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.416970  263219 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0814 09:55:03.418696  263219 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0814 09:55:03.418775  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0814 09:55:03.418788  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0814 09:55:03.418847  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:03.420770  263219 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0814 09:55:03.422926  263219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:55:03.423077  263219 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:55:03.423089  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:55:03.423141  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:03.420854  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 09:55:03.423480  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0814 09:55:03.423549  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
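
Each addon install above opens its own SSH session into the node container, and the host port that Docker mapped to the container's 22/tcp is discovered by templating docker inspect over the published-port table (container name taken from the log):

	# resolve the host port mapped to the node container's SSH port
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  newest-cni-20210814095308-6746
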
	I0814 09:55:03.428439  263219 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210814095308-6746"
	W0814 09:55:03.428461  263219 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:55:03.428490  263219 host.go:66] Checking if "newest-cni-20210814095308-6746" exists ...
	I0814 09:55:03.428902  263219 cli_runner.go:115] Run: docker container inspect newest-cni-20210814095308-6746 --format={{.State.Status}}
	I0814 09:55:03.443739  263219 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0814 09:55:03.449440  263219 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:55:03.449770  263219 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:55:03.469347  263219 api_server.go:70] duration metric: took 107.743895ms to wait for apiserver process to appear ...
	I0814 09:55:03.469372  263219 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:55:03.469383  263219 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0814 09:55:03.476321  263219 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0814 09:55:03.477254  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.477304  263219 api_server.go:139] control plane version: v1.22.0-rc.0
	I0814 09:55:03.477323  263219 api_server.go:129] duration metric: took 7.944897ms to wait for apiserver health ...
	I0814 09:55:03.477335  263219 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:55:03.478668  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.483131  263219 system_pods.go:59] 9 kube-system pods found
	I0814 09:55:03.483167  263219 system_pods.go:61] "coredns-78fcd69978-gz25q" [20b6b7da-c5c2-4631-8357-1a6ffaba0b3f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.483178  263219 system_pods.go:61] "etcd-newest-cni-20210814095308-6746" [1d958fd9-c1c4-4211-8ceb-d9f0c8d19ede] Running
	I0814 09:55:03.483194  263219 system_pods.go:61] "kindnet-qmdzb" [515a8aac-189c-47a3-9a37-3210ef5cfd44] Running
	I0814 09:55:03.483201  263219 system_pods.go:61] "kube-apiserver-newest-cni-20210814095308-6746" [0810d2fb-3ed7-43f1-8978-ab9fbf53f8f5] Running
	I0814 09:55:03.483211  263219 system_pods.go:61] "kube-controller-manager-newest-cni-20210814095308-6746" [ce6fab88-2ecd-4927-9a7d-a74284dadee2] Running
	I0814 09:55:03.483220  263219 system_pods.go:61] "kube-proxy-5jhgl" [1c1f42ab-2957-447b-9911-2959da7ffe6d] Running
	I0814 09:55:03.483232  263219 system_pods.go:61] "kube-scheduler-newest-cni-20210814095308-6746" [4bd13c2a-5f90-4e9a-897f-5d58ee4467e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 09:55:03.483243  263219 system_pods.go:61] "metrics-server-7c784ccb57-lq9bb" [4173e8b0-e67c-4d55-aa26-732c3e6ff081] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.483255  263219 system_pods.go:61] "storage-provisioner" [9a8a638e-e802-41aa-9fca-d4ed41608c70] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0814 09:55:03.483265  263219 system_pods.go:74] duration metric: took 5.921097ms to wait for pod list to return data ...
	I0814 09:55:03.483278  263219 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:55:03.485639  263219 default_sa.go:45] found service account: "default"
	I0814 09:55:03.485662  263219 default_sa.go:55] duration metric: took 2.376939ms for default service account to be created ...
	I0814 09:55:03.485672  263219 kubeadm.go:547] duration metric: took 124.073585ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0814 09:55:03.485696  263219 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:55:03.486516  263219 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:55:03.486534  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:55:03.486615  263219 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210814095308-6746
	I0814 09:55:03.487650  263219 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:55:03.487674  263219 node_conditions.go:123] node cpu capacity is 8
	I0814 09:55:03.487690  263219 node_conditions.go:105] duration metric: took 1.988394ms to run NodePressure ...
	I0814 09:55:03.487701  263219 start.go:231] waiting for startup goroutines ...
	I0814 09:55:03.488462  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.529983  263219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/newest-cni-20210814095308-6746/id_rsa Username:docker}
	I0814 09:55:03.577421  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:55:03.577541  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0814 09:55:03.577564  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0814 09:55:03.581019  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 09:55:03.581036  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0814 09:55:03.590270  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0814 09:55:03.590287  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0814 09:55:03.593085  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 09:55:03.593102  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0814 09:55:03.602634  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0814 09:55:03.602650  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0814 09:55:03.605498  263219 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:55:03.605514  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0814 09:55:03.615981  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0814 09:55:03.615999  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0814 09:55:03.618407  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:55:03.629586  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0814 09:55:03.629605  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0814 09:55:03.630364  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:55:03.642536  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0814 09:55:03.642553  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0814 09:55:03.718968  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0814 09:55:03.718994  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0814 09:55:03.733810  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0814 09:55:03.733838  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0814 09:55:03.815012  263219 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:55:03.815038  263219 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0814 09:55:03.832642  263219 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:55:04.108364  263219 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210814095308-6746"
	I0814 09:55:04.243099  263219 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0814 09:55:04.243126  263219 addons.go:344] enableAddons completed in 881.466189ms
	I0814 09:55:04.289541  263219 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0814 09:55:04.291219  263219 out.go:177] 
	W0814 09:55:04.291353  263219 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0814 09:55:04.292703  263219 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0814 09:55:04.294148  263219 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210814095308-6746" cluster and "default" namespace by default
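
The closing warning is purely about client/server version skew: the host kubectl is 1.20.5 against a 1.22.0-rc.0 control plane, two minor versions apart, where kubectl officially supports only one. The workaround the output suggests runs a version-matched kubectl bundled by minikube (everything after -- is passed through):

	# use minikube's version-matched kubectl against this profile
	minikube kubectl -p newest-cni-20210814095308-6746 -- get pods -A
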
	I0814 09:55:01.757880  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:03.758948  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:06.258001  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:08.757342  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:11.258507  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:13.260224  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	I0814 09:55:15.757500  250455 pod_ready.go:102] pod "metrics-server-7c784ccb57-fb564" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	ef09908540d7f       6de166512aa22       15 seconds ago       Running             kindnet-cni               1                   9c614b223bfca
	5bacdc27c2e81       ea6b13ed84e03       16 seconds ago       Running             kube-proxy                1                   3f81a33e54991
	d855c2c39957c       cf9cba6c3e4a8       20 seconds ago       Running             kube-controller-manager   1                   1801c8f895d52
	4fe28eb2dc10e       b2462aa94d403       20 seconds ago       Running             kube-apiserver            1                   9bcaab314a378
	c8f7521d5e47e       7da2efaa5b480       20 seconds ago       Running             kube-scheduler            1                   be7b27c626e04
	e8a692b02eb5f       0048118155842       20 seconds ago       Running             etcd                      1                   17cf182148edb
	3cf010f2c16cd       6de166512aa22       About a minute ago   Exited              kindnet-cni               0                   83bcd0e47a241
	5e4d53be68daa       ea6b13ed84e03       About a minute ago   Exited              kube-proxy                0                   2ad25b7db0173
	8934417c26f11       0048118155842       About a minute ago   Exited              etcd                      0                   ab8031108aef3
	7812503546803       cf9cba6c3e4a8       About a minute ago   Exited              kube-controller-manager   0                   7c26f97724898
	2f5be69ea0aa5       7da2efaa5b480       About a minute ago   Exited              kube-scheduler            0                   3fe6609137187
	97422526a92f0       b2462aa94d403       About a minute ago   Exited              kube-apiserver            0                   57e14a4b09668
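
The table records the restart cleanly: every control-plane component has an Exited container at ATTEMPT 0 from the first boot and a Running container at ATTEMPT 1 created 15-20 seconds before collection. The same view can be pulled from the node at any time; note that plain crictl ps only shows running containers:

	# show running and exited CRI containers with their attempt counters
	sudo crictl ps -a
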
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:54:31 UTC, end at Sat 2021-08-14 09:55:18 UTC. --
	Aug 14 09:54:58 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:54:58.019264749Z" level=info msg="StartContainer for \"d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83\" returns successfully"
	Aug 14 09:54:58 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:54:58.019276941Z" level=info msg="StartContainer for \"4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea\" returns successfully"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:01.524615635Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209405723Z" level=info msg="StopPodSandbox for \"2ad25b7db0173b319378331996118dff8dea36520fe0fb9199772a879e112a92\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209510101Z" level=info msg="Container to stop \"5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209608040Z" level=info msg="TearDown network for sandbox \"2ad25b7db0173b319378331996118dff8dea36520fe0fb9199772a879e112a92\" successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.209620641Z" level=info msg="StopPodSandbox for \"2ad25b7db0173b319378331996118dff8dea36520fe0fb9199772a879e112a92\" returns successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210007541Z" level=info msg="StopPodSandbox for \"83bcd0e47a241079722a0d2db557637f5ab94b693118d620ddd690781e1362d6\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210056302Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-5jhgl,Uid:1c1f42ab-2957-447b-9911-2959da7ffe6d,Namespace:kube-system,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210072364Z" level=info msg="Container to stop \"3cf010f2c16cdf8f704d0267d3122a41c95456d050ee34276b03767797ee9e08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210246628Z" level=info msg="TearDown network for sandbox \"83bcd0e47a241079722a0d2db557637f5ab94b693118d620ddd690781e1362d6\" successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210261376Z" level=info msg="StopPodSandbox for \"83bcd0e47a241079722a0d2db557637f5ab94b693118d620ddd690781e1362d6\" returns successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.210686818Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-qmdzb,Uid:515a8aac-189c-47a3-9a37-3210ef5cfd44,Namespace:kube-system,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.232666206Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47 pid=1193
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.233264169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e pid=1195
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.378762535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jhgl,Uid:1c1f42ab-2957-447b-9911-2959da7ffe6d,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.381197333Z" level=info msg="CreateContainer within sandbox \"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.438777842Z" level=info msg="CreateContainer within sandbox \"3f81a33e549918cc7a83039d00529192d03c9de524610d3e08c334a59299aa9e\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.439174939Z" level=info msg="StartContainer for \"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.603698417Z" level=info msg="StartContainer for \"5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e\" returns successfully"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.704590004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-qmdzb,Uid:515a8aac-189c-47a3-9a37-3210ef5cfd44,Namespace:kube-system,Attempt:1,} returns sandbox id \"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.710245343Z" level=info msg="CreateContainer within sandbox \"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.775137530Z" level=info msg="CreateContainer within sandbox \"9c614b223bfcadf7f34d8f501c5f13ff048752d169ecc1d529e19c3eed383c47\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92\""
	Aug 14 09:55:02 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:02.775581594Z" level=info msg="StartContainer for \"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92\""
	Aug 14 09:55:03 newest-cni-20210814095308-6746 containerd[336]: time="2021-08-14T09:55:03.014567862Z" level=info msg="StartContainer for \"ef09908540d7f057879d01381ff919bb13c8a851bc47e4682cc5c14f36e0ae92\" returns successfully"
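
The "Logs begin at ..." header marks this section as journalctl output. containerd shows the expected restart choreography: StopPodSandbox against the Attempt:0 sandboxes, then RunPodSandbox, CreateContainer, and StartContainer for the Attempt:1 replacements. The same window can be collected on the node with something like:

	# containerd unit logs for the window covered by this report
	sudo journalctl -u containerd --no-pager --since "2021-08-14 09:54:31"
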
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.035630] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth0b3713f0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa 08 7c e8 24 aa 08 06        ........|.$...
	[  +0.851133] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000003] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +2.011842] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +4.227682] net_ratelimit: 2 callbacks suppressed
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000001] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000000] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +8.187413] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000025] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[Aug14 09:53] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:54] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [8934417c26f11e073903b06a13d40bfed90968f34b88550c890e63e2753ec2d0] <==
	* {"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.800980327s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.795983314s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[830146092] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:85; }","duration":"1.801055769s","start":"2021-08-14T09:53:50.512Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[830146092] 'agreement among raft nodes before linearized reading'  (duration: 1.798057939s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.606273997s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-20210814095308-6746\" ","response":"range_response_count:1 size:6222"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[286981371] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:85; }","duration":"1.796010171s","start":"2021-08-14T09:53:50.517Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[286981371] 'agreement among raft nodes before linearized reading'  (duration: 1.792999205s)"],"step_count":1}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[1329788645] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-20210814095308-6746; range_end:; response_count:1; response_revision:85; }","duration":"1.60630003s","start":"2021-08-14T09:53:50.706Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[1329788645] 'agreement among raft nodes before linearized reading'  (duration: 1.603220066s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.517Z","time spent":"1.796076056s","remote":"127.0.0.1:33498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.706Z","time spent":"1.60635546s","remote":"127.0.0.1:33352","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":6245,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-20210814095308-6746\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.512Z","time spent":"1.80110827s","remote":"127.0.0.1:33346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":0,"response size":27,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"857.174879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[1164043677] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:0; response_revision:85; }","duration":"857.705329ms","start":"2021-08-14T09:53:51.455Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[1164043677] 'agreement among raft nodes before linearized reading'  (duration: 854.229493ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:51.455Z","time spent":"857.752348ms","remote":"127.0.0.1:33442","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":27,"request content":"key:\"/registry/clusterrolebindings/system:discovery\" "}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.725474595s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2021-08-14T09:53:52.313Z","caller":"traceutil/trace.go:171","msg":"trace[1784313380] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:85; }","duration":"1.726138835s","start":"2021-08-14T09:53:50.587Z","end":"2021-08-14T09:53:52.313Z","steps":["trace[1784313380] 'agreement among raft nodes before linearized reading'  (duration: 1.722545516s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:53:52.313Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:53:50.587Z","time spent":"1.726207349s","remote":"127.0.0.1:33498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"868.126105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-14T09:54:02.888Z","caller":"traceutil/trace.go:171","msg":"trace[105308563] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:355; }","duration":"868.242058ms","start":"2021-08-14T09:54:02.020Z","end":"2021-08-14T09:54:02.888Z","steps":["trace[105308563] 'range keys from in-memory index tree'  (duration: 868.046874ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:54:02.019Z","time spent":"868.306693ms","remote":"127.0.0.1:33354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":28,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"918.257504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210814095308-6746\" ","response":"range_response_count:1 size:4247"}
	{"level":"info","ts":"2021-08-14T09:54:02.888Z","caller":"traceutil/trace.go:171","msg":"trace[759917799] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-newest-cni-20210814095308-6746; range_end:; response_count:1; response_revision:355; }","duration":"918.304691ms","start":"2021-08-14T09:54:01.970Z","end":"2021-08-14T09:54:02.888Z","steps":["trace[759917799] 'range keys from in-memory index tree'  (duration: 918.123987ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:54:02.888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-14T09:54:01.970Z","time spent":"918.356228ms","remote":"127.0.0.1:33352","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":4270,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20210814095308-6746\" "}
	{"level":"warn","ts":"2021-08-14T09:54:07.018Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.689781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4667"}
	{"level":"info","ts":"2021-08-14T09:54:07.018Z","caller":"traceutil/trace.go:171","msg":"trace[2011038256] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:437; }","duration":"100.812008ms","start":"2021-08-14T09:54:06.917Z","end":"2021-08-14T09:54:07.018Z","steps":["trace[2011038256] 'agreement among raft nodes before linearized reading'  (duration: 100.658129ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-14T09:54:07.018Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.282663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-78fcd69978\" ","response":"range_response_count:1 size:3633"}
	{"level":"info","ts":"2021-08-14T09:54:07.018Z","caller":"traceutil/trace.go:171","msg":"trace[1959183368] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-78fcd69978; range_end:; response_count:1; response_revision:437; }","duration":"100.340337ms","start":"2021-08-14T09:54:06.918Z","end":"2021-08-14T09:54:07.018Z","steps":["trace[1959183368] 'agreement among raft nodes before linearized reading'  (duration: 100.259805ms)"],"step_count":1}
	
	* 
	* ==> etcd [e8a692b02eb5f09aac20cc4fd324bffa93501542755dcf9f11f2be08689447da] <==
	* {"level":"info","ts":"2021-08-14T09:54:57.966Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-14T09:54:57.967Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.0","cluster-id":"3a56e4ca95e2355c","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-14T09:54:57.967Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-14T09:54:57.968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2021-08-14T09:54:57.968Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2021-08-14T09:54:57.968Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-08-14T09:54:57.970Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2021-08-14T09:54:58.301Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20210814095308-6746 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-14T09:54:58.301Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:54:58.301Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-08-14T09:54:58.304Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  09:55:28 up  1:38,  0 users,  load average: 1.72, 1.57, 1.67
	Linux newest-cni-20210814095308-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [4fe28eb2dc10e7493efa28b67f9777606ecede593018587b69dfb2a58fc683ea] <==
	* I0814 09:55:01.466564       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0814 09:55:01.466619       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0814 09:55:01.514545       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 09:55:01.515262       1 shared_informer.go:247] Caches are synced for node_authorizer 
	E0814 09:55:01.514724       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0814 09:55:01.600893       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0814 09:55:01.601200       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0814 09:55:01.601826       1 cache.go:39] Caches are synced for autoregister controller
	I0814 09:55:01.602087       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0814 09:55:01.600906       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0814 09:55:01.602622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 09:55:01.630263       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0814 09:55:02.465126       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0814 09:55:02.465286       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0814 09:55:02.469361       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0814 09:55:03.178490       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 09:55:03.271689       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 09:55:03.285155       1 controller.go:611] quota admission added evaluator for: deployments.apps
	W0814 09:55:03.317842       1 handler_proxy.go:104] no RequestInfo found in the context
	E0814 09:55:03.317922       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0814 09:55:03.317933       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 09:55:03.330565       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 09:55:03.335501       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 09:55:04.156337       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-apiserver [97422526a92f0806f964ea6f56d8f453cb53ee77ce00c08aa8ae9b58cb9d83ce] <==
	* I0814 09:53:52.314358       1 trace.go:205] Trace[1226201223]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-newest-cni-20210814095308-6746,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:56041f4f-ac38-49f6-a1e2-d7ea76edae33,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:53:50.706) (total time: 1608ms):
	Trace[1226201223]: ---"About to write a response" 1607ms (09:53:52.313)
	Trace[1226201223]: [1.608028787s] [1.608028787s] END
	I0814 09:53:52.314484       1 trace.go:205] Trace[1459702612]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:55520a25-3343-4135-9bed-f5cf7615be97,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:53:51.455) (total time: 859ms):
	Trace[1459702612]: [859.120585ms] [859.120585ms] END
	I0814 09:53:52.314488       1 trace.go:205] Trace[1387414596]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:433b4818-a07d-4eb2-857f-986f1288d8e6,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (14-Aug-2021 09:53:50.511) (total time: 1802ms):
	Trace[1387414596]: [1.802769395s] [1.802769395s] END
	I0814 09:53:52.314998       1 trace.go:205] Trace[845452184]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:104569a6-661a-4e2f-8664-faa5870bff22,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:53:50.460) (total time: 1854ms):
	Trace[845452184]: [1.85434696s] [1.85434696s] END
	I0814 09:53:52.810044       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 09:53:52.839147       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0814 09:53:52.932537       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0814 09:53:52.933399       1 controller.go:611] quota admission added evaluator for: endpoints
	I0814 09:53:52.936454       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 09:53:53.125739       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 09:53:54.305778       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0814 09:53:54.333804       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 09:53:59.402762       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 09:54:02.889388       1 trace.go:205] Trace[940905341]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-newest-cni-20210814095308-6746,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:d230ffa0-e61c-4074-9439-daa373a87415,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:54:01.969) (total time: 919ms):
	Trace[940905341]: ---"About to write a response" 919ms (09:54:02.889)
	Trace[940905341]: [919.730292ms] [919.730292ms] END
	I0814 09:54:02.890264       1 trace.go:205] Trace[1167629556]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:2e7b84b4-e9c9-4d72-8e2a-bede6d4f1a99,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:54:02.019) (total time: 870ms):
	Trace[1167629556]: [870.652983ms] [870.652983ms] END
	I0814 09:54:06.735555       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0814 09:54:06.811363       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [7812503546803559a63c7d68dfd9df9990b8041ded74f1db6694ba2fe08ed581] <==
	* I0814 09:54:06.801087       1 disruption.go:371] Sending events to api server.
	I0814 09:54:06.801067       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0814 09:54:06.804418       1 range_allocator.go:373] Set node newest-cni-20210814095308-6746 PodCIDR to [192.168.0.0/24]
	I0814 09:54:06.815212       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-xsdxs"
	I0814 09:54:06.821565       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5jhgl"
	I0814 09:54:06.821868       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-gz25q"
	I0814 09:54:06.821892       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qmdzb"
	I0814 09:54:06.829111       1 event.go:291] "Event occurred" object="kube-dns" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I0814 09:54:06.901007       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0814 09:54:06.959544       1 shared_informer.go:247] Caches are synced for cronjob 
	I0814 09:54:06.964966       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:54:06.976237       1 shared_informer.go:247] Caches are synced for job 
	I0814 09:54:06.976238       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0814 09:54:06.976261       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0814 09:54:06.976296       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0814 09:54:06.985497       1 shared_informer.go:247] Caches are synced for resource quota 
	I0814 09:54:07.123527       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0814 09:54:07.130744       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-xsdxs"
	I0814 09:54:07.365769       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:54:07.365791       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 09:54:07.411379       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0814 09:54:08.869610       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0814 09:54:08.877288       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:54:08.903043       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:54:08.910364       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-lq9bb"
	
	* 
	* ==> kube-controller-manager [d855c2c39957cc34e65dc76062ad7765e20eef0da31eb38e4127b5176b22ba83] <==
	* I0814 09:54:59.403511       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0814 09:55:04.711533       1 request.go:665] Waited for 1.00834481s due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/apis/discovery.k8s.io/v1beta1?timeout=32s
	E0814 09:55:05.112296       1 controllermanager.go:467] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0814 09:55:05.113192       1 shared_informer.go:240] Waiting for caches to sync for tokens
	E0814 09:55:05.131483       1 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0814 09:55:05.131595       1 controllermanager.go:577] Started "namespace"
	I0814 09:55:05.131678       1 namespace_controller.go:200] Starting namespace controller
	I0814 09:55:05.131700       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0814 09:55:05.138145       1 controllermanager.go:577] Started "horizontalpodautoscaling"
	I0814 09:55:05.138247       1 horizontal.go:169] Starting HPA controller
	I0814 09:55:05.138267       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0814 09:55:05.140391       1 controllermanager.go:577] Started "cronjob"
	I0814 09:55:05.140571       1 cronjob_controllerv2.go:125] "Starting cronjob controller v2"
	I0814 09:55:05.140588       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0814 09:55:05.143454       1 controllermanager.go:577] Started "clusterrole-aggregation"
	I0814 09:55:05.143606       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
	I0814 09:55:05.143618       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0814 09:55:05.145522       1 controllermanager.go:577] Started "attachdetach"
	I0814 09:55:05.145636       1 attach_detach_controller.go:328] Starting attach detach controller
	I0814 09:55:05.145644       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0814 09:55:05.147453       1 controllermanager.go:577] Started "job"
	I0814 09:55:05.147473       1 job_controller.go:172] Starting job controller
	I0814 09:55:05.147482       1 shared_informer.go:240] Waiting for caches to sync for job
	I0814 09:55:05.149431       1 node_ipam_controller.go:91] Sending events to api server.
	I0814 09:55:05.213720       1 shared_informer.go:247] Caches are synced for tokens 
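
Note: the two "unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1" errors above come from API discovery: the aggregated metrics API is registered, but its backing metrics-server pod is not serving yet (the apiserver log shows the matching 503), so discovery returns partial results plus an error. A sketch of the same check with client-go; the kubeconfig path is an assumption for illustration:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// When an aggregated API (here metrics.k8s.io/v1beta1) is unavailable,
    	// discovery still returns every group it could resolve together with a
    	// group-discovery error, which is what the controller-manager logs.
    	groups, resources, err := dc.ServerGroupsAndResources()
    	if discovery.IsGroupDiscoveryFailedError(err) {
    		fmt.Printf("partial discovery: %v\n", err)
    	} else if err != nil {
    		panic(err)
    	}
    	fmt.Printf("resolved %d groups, %d resource lists\n", len(groups), len(resources))
    }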
	
	* 
	* ==> kube-proxy [5bacdc27c2e818d7faa90777585a48578098b899b0643c8dc1eefc87233a283e] <==
	* I0814 09:55:02.643771       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:55:02.643819       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:55:02.643838       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0814 09:55:02.720975       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:55:02.721008       1 server_others.go:212] Using iptables Proxier.
	I0814 09:55:02.721021       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:55:02.721035       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:55:02.721390       1 server.go:649] Version: v1.22.0-rc.0
	I0814 09:55:02.722068       1 config.go:224] Starting endpoint slice config controller
	I0814 09:55:02.722159       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0814 09:55:02.722198       1 config.go:315] Starting service config controller
	I0814 09:55:02.722202       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0814 09:55:02.725229       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210814095308-6746.169b23a9defa06c9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03e029dab07e501, ext:151845569, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210814095308-6746", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210814095308-6746", UID:"newest-cni-20210814095308-6746", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210814095308-6746.169b23a9defa06c9" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0814 09:55:02.822378       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:55:02.822383       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [5e4d53be68daac0b3c1f434da414b34a383e2b3a78639fa4263e0f85527f24bf] <==
	* I0814 09:54:08.129166       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0814 09:54:08.129235       1 server_others.go:140] Detected node IP 192.168.58.2
	W0814 09:54:08.129260       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0814 09:54:08.210544       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:54:08.210599       1 server_others.go:212] Using iptables Proxier.
	I0814 09:54:08.210617       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:54:08.210634       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:54:08.211381       1 server.go:649] Version: v1.22.0-rc.0
	I0814 09:54:08.212272       1 config.go:315] Starting service config controller
	I0814 09:54:08.212480       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0814 09:54:08.212592       1 config.go:224] Starting endpoint slice config controller
	I0814 09:54:08.212605       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0814 09:54:08.217120       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210814095308-6746.169b239d2df56c32", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03e02900ca9e336, ext:206518773, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210814095308-6746", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210814095308-6746", UID:"newest-cni-20210814095308-6746", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210814095308-6746.169b239d2df56c32" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0814 09:54:08.312890       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0814 09:54:08.312965       1 shared_informer.go:247] Caches are synced for service config 
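
Note: the "Server rejected event" error in both kube-proxy logs is a validation failure, not connectivity: the events/v1 object is created in the "default" namespace while its Regarding reference is the cluster-scoped Node, whose namespace is empty, and the apiserver rejects the mismatch (the "involvedObject.namespace ... does not match event.namespace" message above). A sketch that rebuilds the offending object from the dumped fields, using only the k8s.io API types:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	eventsv1 "k8s.io/api/events/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Field values copied from the rejected event above.
    	ev := eventsv1.Event{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "newest-cni-20210814095308-6746.169b239d2df56c32",
    			Namespace: "default",
    		},
    		ReportingController: "kube-proxy",
    		ReportingInstance:   "kube-proxy-newest-cni-20210814095308-6746",
    		Action:              "StartKubeProxy",
    		Reason:              "Starting",
    		Type:                "Normal",
    		Regarding: corev1.ObjectReference{
    			Kind: "Node",
    			Name: "newest-cni-20210814095308-6746",
    			// Nodes are cluster-scoped, so this namespace stays empty --
    			// the comparison the apiserver's validation complains about.
    		},
    	}
    	fmt.Println("namespaces agree:", ev.Namespace == ev.Regarding.Namespace) // false
    }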
	
	* 
	* ==> kube-scheduler [2f5be69ea0aa5048043849dd40ea3a269ace093ce65f1be4744ac466c85de7c4] <==
	* E0814 09:53:46.563491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:46.632122       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 09:53:46.673541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:53:46.713921       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:53:47.940476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:53:48.118504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:53:48.130646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:53:48.172018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:48.292783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:53:48.644211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:48.976570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:53:49.077462       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:53:49.111558       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:49.117545       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:53:49.149747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:49.275716       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:53:49.293783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:53:49.521838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 09:53:49.554067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:53:51.948233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:53:51.970406       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:53:51.995506       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:53:52.135637       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:53:52.503115       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0814 09:54:00.918315       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
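
Note: the wall of "is forbidden" errors above is a startup race, not a broken cluster: the scheduler's informers begin listing resources before the apiserver has finished bootstrapping the system:kube-scheduler RBAC bindings, and the errors stop once the bindings exist (the cache-sync line at 09:54:00 marks the recovery). A sketch of probing one of those permissions with client-go; the kubeconfig path is an assumption for illustration:

    package main

    import (
    	"context"
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The scheduler's reflector issues this same list; while RBAC is still
    	// bootstrapping it gets 403 Forbidden, which IsForbidden detects.
    	_, err = client.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
    	switch {
    	case apierrors.IsForbidden(err):
    		fmt.Println("RBAC not bootstrapped yet:", err)
    	case err != nil:
    		panic(err)
    	default:
    		fmt.Println("storageclasses list permitted")
    	}
    }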
	
	* 
	* ==> kube-scheduler [c8f7521d5e47e8d06a28b207f1dd064f7fb2e57e3740c024ed800d2cac545adc] <==
	* W0814 09:54:58.112568       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0814 09:54:58.614473       1 serving.go:347] Generated self-signed cert in-memory
	W0814 09:55:01.484562       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 09:55:01.484598       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 09:55:01.484610       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 09:55:01.484618       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 09:55:01.527873       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0814 09:55:01.527994       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 09:55:01.528015       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:55:01.528028       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0814 09:55:01.606685       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0814 09:55:01.606952       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0814 09:55:01.628136       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:54:31 UTC, end at Sat 2021-08-14 09:55:29 UTC. --
	Aug 14 09:55:00 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:00.929239     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.030140     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.130885     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.231433     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.331980     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: E0814 09:55:01.432837     712 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210814095308-6746\" not found"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.523020     712 kubelet_node_status.go:109] "Node was previously registered" node="newest-cni-20210814095308-6746"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.523126     712 kubelet_node_status.go:74] "Successfully registered node" node="newest-cni-20210814095308-6746"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.524218     712 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.524924     712 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.902050     712 apiserver.go:52] "Watching apiserver"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.906955     712 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 09:55:01 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:01.907080     712 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011536     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/515a8aac-189c-47a3-9a37-3210ef5cfd44-xtables-lock\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011580     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c1f42ab-2957-447b-9911-2959da7ffe6d-xtables-lock\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011601     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c1f42ab-2957-447b-9911-2959da7ffe6d-lib-modules\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011657     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/515a8aac-189c-47a3-9a37-3210ef5cfd44-lib-modules\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011685     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/515a8aac-189c-47a3-9a37-3210ef5cfd44-cni-cfg\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011773     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfq5j\" (UniqueName: \"kubernetes.io/projected/515a8aac-189c-47a3-9a37-3210ef5cfd44-kube-api-access-vfq5j\") pod \"kindnet-qmdzb\" (UID: \"515a8aac-189c-47a3-9a37-3210ef5cfd44\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011822     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c1f42ab-2957-447b-9911-2959da7ffe6d-kube-proxy\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011853     712 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfsp6\" (UniqueName: \"kubernetes.io/projected/1c1f42ab-2957-447b-9911-2959da7ffe6d-kube-api-access-bfsp6\") pod \"kube-proxy-5jhgl\" (UID: \"1c1f42ab-2957-447b-9911-2959da7ffe6d\") "
	Aug 14 09:55:02 newest-cni-20210814095308-6746 kubelet[712]: I0814 09:55:02.011874     712 reconciler.go:157] "Reconciler: start to sync state"
	Aug 14 09:55:05 newest-cni-20210814095308-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:55:05 newest-cni-20210814095308-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:55:05 newest-cni-20210814095308-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:55:28.740946  268013 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (24.23s)
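
Note: both Pause failures in this report surface the same way: the test shells out to the minikube binary and inspects the exit code (exit status 110 for the failed log collection above; the pause in the next test exits with status 80). A generic sketch of that pattern with only the Go standard library, borrowing the binary path and profile name from the invocation below:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "pause",
    		"-p", "default-k8s-different-port-20210814095040-6746",
    		"--alsologtostderr", "-v=1")
    	out, err := cmd.CombinedOutput()
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		// The "Non-zero exit ... exit status 80" case recorded below.
    		fmt.Printf("exit status %d\noutput:\n%s\n", exitErr.ExitCode(), out)
    		return
    	}
    	if err != nil {
    		panic(err) // binary failed to start at all
    	}
    	fmt.Printf("paused OK:\n%s\n", out)
    }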

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (5.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210814095040-6746 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210814095040-6746 --alsologtostderr -v=1: exit status 80 (1.879356215s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-different-port-20210814095040-6746 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 09:57:52.288418  293459 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:57:52.288512  293459 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:52.288522  293459 out.go:311] Setting ErrFile to fd 2...
	I0814 09:57:52.288526  293459 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:52.288630  293459 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:57:52.288845  293459 out.go:305] Setting JSON to false
	I0814 09:57:52.288865  293459 mustload.go:65] Loading cluster: default-k8s-different-port-20210814095040-6746
	I0814 09:57:52.289188  293459 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:57:52.289615  293459 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:52.337108  293459 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:52.338071  293459 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210814095040-6746 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0814 09:57:52.340499  293459 out.go:177] * Pausing node default-k8s-different-port-20210814095040-6746 ... 
	I0814 09:57:52.340533  293459 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:52.340870  293459 ssh_runner.go:149] Run: systemctl --version
	I0814 09:57:52.340913  293459 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:52.386146  293459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:52.480368  293459 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:52.489104  293459 pause.go:50] kubelet running: true
	I0814 09:57:52.489157  293459 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:57:52.599887  293459 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:57:52.599963  293459 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:57:52.669668  293459 cri.go:76] found id: "82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557"
	I0814 09:57:52.669695  293459 cri.go:76] found id: "0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918"
	I0814 09:57:52.669700  293459 cri.go:76] found id: "19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223"
	I0814 09:57:52.669705  293459 cri.go:76] found id: "f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d"
	I0814 09:57:52.669710  293459 cri.go:76] found id: "a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3"
	I0814 09:57:52.669717  293459 cri.go:76] found id: "ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371"
	I0814 09:57:52.669722  293459 cri.go:76] found id: "e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06"
	I0814 09:57:52.669727  293459 cri.go:76] found id: "d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde"
	I0814 09:57:52.669733  293459 cri.go:76] found id: "6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	I0814 09:57:52.669742  293459 cri.go:76] found id: "9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526"
	I0814 09:57:52.669759  293459 cri.go:76] found id: ""
	I0814 09:57:52.669807  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:57:52.710475  293459 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918","pid":5727,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918/rootfs","created":"2021-08-14T09:57:29.001157326Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223","pid":5420,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223","rootfs":"/run/containerd/io.containerd.runtime.v2
.task/k8s.io/19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223/rootfs","created":"2021-08-14T09:57:27.93721129Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","pid":5298,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e/rootfs","created":"2021-08-14T09:57:27.429031723Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet
-9zklk_6f6c319c-8cf6-45c5-bba1-3a5999ff9a0e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","pid":4569,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0/rootfs","created":"2021-08-14T09:57:06.26100744Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210814095040-6746_50e2dfa2b1eaaff4e1c16b1d209880b0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","pid":6048,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s
.io/54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801/rootfs","created":"2021-08-14T09:57:31.044969338Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-54btm_dfc36e40-fb64-41d8-a005-ab76555690d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","pid":5290,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898/rootfs","created":"2021-08-14T09:57:27.29701866Z","annot
ations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-klrbg_18dba609-fb6b-4895-aca7-2d94942571f6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","pid":4567,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804/rootfs","created":"2021-08-14T09:57:06.261015047Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210814095040-67
46_4ff167100744a55cb66874f5f0f5a8f3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557","pid":5947,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557/rootfs","created":"2021-08-14T09:57:30.817022661Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","pid":4588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414/rootfs","created":"2021-08-14T09:57:06.261062989Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210814095040-6746_6140894add4347409ea837150aff8296"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526","pid":6178,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526/rootfs","created":"2021-08-14T09:57:31.800918203Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"con
tainer","io.kubernetes.cri.sandbox-id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","pid":5664,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836/rootfs","created":"2021-08-14T09:57:28.527905284Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-zjjkn_50cc162c-7c79-4bcb-a514-12fbea928898"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3","pid":4706,"status":"running","bundle":"/run/containerd/io.c
ontainerd.runtime.v2.task/k8s.io/a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3/rootfs","created":"2021-08-14T09:57:06.548899793Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371","pid":4716,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371/rootfs","created":"2021-08-14T09:57:06.568991611Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.conta
iner-type":"container","io.kubernetes.cri.sandbox-id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde","pid":4715,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde/rootfs","created":"2021-08-14T09:57:06.568993452Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4","pid":6117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6fcb757f64e9d2cce8be
66b8e6a95c66504da6dede6dd14334328a7f2c577b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4/rootfs","created":"2021-08-14T09:57:31.333152576Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-hjr7d_2936fbc6-dc7a-429f-b4ae-fa739e5e2c42"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06","pid":4699,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06/rootfs","created":"2021-08-14T09:57:06.568985137Z","annotations":{"io.kubernetes.cri.co
ntainer-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","pid":4568,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df/rootfs","created":"2021-08-14T09:57:06.260986123Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210814095040-6746_929349d6f8b1131233aab1522615c193"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef0195e898ea1e50d58ed3fed8747f7ba
27336b3c8dd4a32d0c0e074dca38229","pid":5915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229/rootfs","created":"2021-08-14T09:57:30.568975443Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c822637b-9c4e-48fa-ba25-77aeb1c4f4ad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d","pid":5330,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0f81c09afef0fe8ea79cd9491a2b04
7484b1e6915ba7513902429bb9142f00d/rootfs","created":"2021-08-14T09:57:27.453031347Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","pid":5874,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82/rootfs","created":"2021-08-14T09:57:30.332983101Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-2ms26_0f0c284a-68cf-4545-
835a-464713e03dfc"},"owner":"root"}]
	I0814 09:57:52.710786  293459 cri.go:113] list returned 20 containers
	I0814 09:57:52.710801  293459 cri.go:116] container: {ID:0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918 Status:running}
	I0814 09:57:52.710815  293459 cri.go:116] container: {ID:19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223 Status:running}
	I0814 09:57:52.710821  293459 cri.go:116] container: {ID:31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e Status:running}
	I0814 09:57:52.710828  293459 cri.go:118] skipping 31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e - not in ps
	I0814 09:57:52.710839  293459 cri.go:116] container: {ID:38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0 Status:running}
	I0814 09:57:52.710846  293459 cri.go:118] skipping 38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0 - not in ps
	I0814 09:57:52.710855  293459 cri.go:116] container: {ID:54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801 Status:running}
	I0814 09:57:52.710861  293459 cri.go:118] skipping 54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801 - not in ps
	I0814 09:57:52.710870  293459 cri.go:116] container: {ID:586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898 Status:running}
	I0814 09:57:52.710877  293459 cri.go:118] skipping 586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898 - not in ps
	I0814 09:57:52.710885  293459 cri.go:116] container: {ID:81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804 Status:running}
	I0814 09:57:52.710891  293459 cri.go:118] skipping 81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804 - not in ps
	I0814 09:57:52.710900  293459 cri.go:116] container: {ID:82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557 Status:running}
	I0814 09:57:52.710910  293459 cri.go:116] container: {ID:844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414 Status:running}
	I0814 09:57:52.710916  293459 cri.go:118] skipping 844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414 - not in ps
	I0814 09:57:52.710925  293459 cri.go:116] container: {ID:9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526 Status:running}
	I0814 09:57:52.710931  293459 cri.go:116] container: {ID:a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836 Status:running}
	I0814 09:57:52.710940  293459 cri.go:118] skipping a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836 - not in ps
	I0814 09:57:52.710946  293459 cri.go:116] container: {ID:a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3 Status:running}
	I0814 09:57:52.710955  293459 cri.go:116] container: {ID:ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371 Status:running}
	I0814 09:57:52.710965  293459 cri.go:116] container: {ID:d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde Status:running}
	I0814 09:57:52.710973  293459 cri.go:116] container: {ID:d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4 Status:running}
	I0814 09:57:52.710982  293459 cri.go:118] skipping d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4 - not in ps
	I0814 09:57:52.710991  293459 cri.go:116] container: {ID:e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06 Status:running}
	I0814 09:57:52.711001  293459 cri.go:116] container: {ID:e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df Status:running}
	I0814 09:57:52.711007  293459 cri.go:118] skipping e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df - not in ps
	I0814 09:57:52.711015  293459 cri.go:116] container: {ID:ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229 Status:running}
	I0814 09:57:52.711025  293459 cri.go:118] skipping ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229 - not in ps
	I0814 09:57:52.711033  293459 cri.go:116] container: {ID:f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d Status:running}
	I0814 09:57:52.711043  293459 cri.go:116] container: {ID:f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82 Status:running}
	I0814 09:57:52.711056  293459 cri.go:118] skipping f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82 - not in ps
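The cri.go lines above filter the `runc list` output before pausing anything: IDs that the preceding crictl query did not return (the pod sandboxes) are skipped as "not in ps", and containers whose state does not match the wanted one are skipped as well. A minimal Go sketch of that filtering step, assuming invented names (runcContainer, filterContainers) rather than minikube's actual cri.go internals:

// Hedged illustration, not minikube source: keep only runc containers that
// crictl also listed and that are currently in wantState.
package main

import "fmt"

type runcContainer struct {
	ID     string
	Status string // "running" or "paused", as reported by runc list
}

func filterContainers(listed []runcContainer, inPS map[string]bool, wantState string) []string {
	var keep []string
	for _, c := range listed {
		if !inPS[c.ID] {
			// Sandboxes show up in runc list but not in crictl ps.
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != wantState {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, wantState)
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	inPS := map[string]bool{"0098384a": true}
	listed := []runcContainer{{ID: "0098384a", Status: "running"}, {ID: "31cc2c57", Status: "running"}}
	fmt.Println(filterContainers(listed, inPS, "running")) // [0098384a]
}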
	I0814 09:57:52.711109  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918
	I0814 09:57:52.726835  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918 19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223
	I0814 09:57:52.740653  293459 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918 19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:57:52Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:57:53.017098  293459 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:53.026607  293459 pause.go:50] kubelet running: false
	I0814 09:57:53.026651  293459 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:57:53.126284  293459 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:57:53.126349  293459 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:57:53.198510  293459 cri.go:76] found id: "82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557"
	I0814 09:57:53.198536  293459 cri.go:76] found id: "0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918"
	I0814 09:57:53.198543  293459 cri.go:76] found id: "19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223"
	I0814 09:57:53.198549  293459 cri.go:76] found id: "f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d"
	I0814 09:57:53.198555  293459 cri.go:76] found id: "a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3"
	I0814 09:57:53.198561  293459 cri.go:76] found id: "ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371"
	I0814 09:57:53.198567  293459 cri.go:76] found id: "e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06"
	I0814 09:57:53.198573  293459 cri.go:76] found id: "d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde"
	I0814 09:57:53.198579  293459 cri.go:76] found id: "6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	I0814 09:57:53.198591  293459 cri.go:76] found id: "9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526"
	I0814 09:57:53.198598  293459 cri.go:76] found id: ""
	I0814 09:57:53.198638  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:57:53.239782  293459 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918","pid":5727,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918/rootfs","created":"2021-08-14T09:57:29.001157326Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223","pid":5420,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223","rootfs":"/run/containerd/io.containerd.runtime.v2.
task/k8s.io/19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223/rootfs","created":"2021-08-14T09:57:27.93721129Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","pid":5298,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e/rootfs","created":"2021-08-14T09:57:27.429031723Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-
9zklk_6f6c319c-8cf6-45c5-bba1-3a5999ff9a0e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","pid":4569,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0/rootfs","created":"2021-08-14T09:57:06.26100744Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210814095040-6746_50e2dfa2b1eaaff4e1c16b1d209880b0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","pid":6048,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.
io/54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801/rootfs","created":"2021-08-14T09:57:31.044969338Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-54btm_dfc36e40-fb64-41d8-a005-ab76555690d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","pid":5290,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898/rootfs","created":"2021-08-14T09:57:27.29701866Z","annota
tions":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-klrbg_18dba609-fb6b-4895-aca7-2d94942571f6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","pid":4567,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804/rootfs","created":"2021-08-14T09:57:06.261015047Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210814095040-674
6_4ff167100744a55cb66874f5f0f5a8f3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557","pid":5947,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557/rootfs","created":"2021-08-14T09:57:30.817022661Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","pid":4588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","rootfs":"/run/containerd/io.containerd.runt
ime.v2.task/k8s.io/844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414/rootfs","created":"2021-08-14T09:57:06.261062989Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210814095040-6746_6140894add4347409ea837150aff8296"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526","pid":6178,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526/rootfs","created":"2021-08-14T09:57:31.800918203Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"cont
ainer","io.kubernetes.cri.sandbox-id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","pid":5664,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836/rootfs","created":"2021-08-14T09:57:28.527905284Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-zjjkn_50cc162c-7c79-4bcb-a514-12fbea928898"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3","pid":4706,"status":"running","bundle":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3/rootfs","created":"2021-08-14T09:57:06.548899793Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371","pid":4716,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371/rootfs","created":"2021-08-14T09:57:06.568991611Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.contai
ner-type":"container","io.kubernetes.cri.sandbox-id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde","pid":4715,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde/rootfs","created":"2021-08-14T09:57:06.568993452Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4","pid":6117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6fcb757f64e9d2cce8be6
6b8e6a95c66504da6dede6dd14334328a7f2c577b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4/rootfs","created":"2021-08-14T09:57:31.333152576Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-hjr7d_2936fbc6-dc7a-429f-b4ae-fa739e5e2c42"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06","pid":4699,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06/rootfs","created":"2021-08-14T09:57:06.568985137Z","annotations":{"io.kubernetes.cri.con
tainer-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","pid":4568,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df/rootfs","created":"2021-08-14T09:57:06.260986123Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210814095040-6746_929349d6f8b1131233aab1522615c193"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef0195e898ea1e50d58ed3fed8747f7ba2
7336b3c8dd4a32d0c0e074dca38229","pid":5915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229/rootfs","created":"2021-08-14T09:57:30.568975443Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c822637b-9c4e-48fa-ba25-77aeb1c4f4ad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d","pid":5330,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0f81c09afef0fe8ea79cd9491a2b047
484b1e6915ba7513902429bb9142f00d/rootfs","created":"2021-08-14T09:57:27.453031347Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","pid":5874,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82/rootfs","created":"2021-08-14T09:57:30.332983101Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-2ms26_0f0c284a-68cf-4545-8
35a-464713e03dfc"},"owner":"root"}]
	I0814 09:57:53.240013  293459 cri.go:113] list returned 20 containers
	I0814 09:57:53.240028  293459 cri.go:116] container: {ID:0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918 Status:paused}
	I0814 09:57:53.240042  293459 cri.go:122] skipping {0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918 paused}: state = "paused", want "running"
	I0814 09:57:53.240058  293459 cri.go:116] container: {ID:19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223 Status:running}
	I0814 09:57:53.240070  293459 cri.go:116] container: {ID:31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e Status:running}
	I0814 09:57:53.240077  293459 cri.go:118] skipping 31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e - not in ps
	I0814 09:57:53.240083  293459 cri.go:116] container: {ID:38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0 Status:running}
	I0814 09:57:53.240088  293459 cri.go:118] skipping 38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0 - not in ps
	I0814 09:57:53.240091  293459 cri.go:116] container: {ID:54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801 Status:running}
	I0814 09:57:53.240105  293459 cri.go:118] skipping 54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801 - not in ps
	I0814 09:57:53.240114  293459 cri.go:116] container: {ID:586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898 Status:running}
	I0814 09:57:53.240121  293459 cri.go:118] skipping 586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898 - not in ps
	I0814 09:57:53.240130  293459 cri.go:116] container: {ID:81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804 Status:running}
	I0814 09:57:53.240139  293459 cri.go:118] skipping 81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804 - not in ps
	I0814 09:57:53.240147  293459 cri.go:116] container: {ID:82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557 Status:running}
	I0814 09:57:53.240154  293459 cri.go:116] container: {ID:844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414 Status:running}
	I0814 09:57:53.240163  293459 cri.go:118] skipping 844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414 - not in ps
	I0814 09:57:53.240168  293459 cri.go:116] container: {ID:9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526 Status:running}
	I0814 09:57:53.240175  293459 cri.go:116] container: {ID:a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836 Status:running}
	I0814 09:57:53.240182  293459 cri.go:118] skipping a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836 - not in ps
	I0814 09:57:53.240190  293459 cri.go:116] container: {ID:a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3 Status:running}
	I0814 09:57:53.240196  293459 cri.go:116] container: {ID:ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371 Status:running}
	I0814 09:57:53.240207  293459 cri.go:116] container: {ID:d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde Status:running}
	I0814 09:57:53.240218  293459 cri.go:116] container: {ID:d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4 Status:running}
	I0814 09:57:53.240226  293459 cri.go:118] skipping d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4 - not in ps
	I0814 09:57:53.240234  293459 cri.go:116] container: {ID:e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06 Status:running}
	I0814 09:57:53.240240  293459 cri.go:116] container: {ID:e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df Status:running}
	I0814 09:57:53.240249  293459 cri.go:118] skipping e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df - not in ps
	I0814 09:57:53.240254  293459 cri.go:116] container: {ID:ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229 Status:running}
	I0814 09:57:53.240261  293459 cri.go:118] skipping ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229 - not in ps
	I0814 09:57:53.240269  293459 cri.go:116] container: {ID:f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d Status:running}
	I0814 09:57:53.240275  293459 cri.go:116] container: {ID:f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82 Status:running}
	I0814 09:57:53.240285  293459 cri.go:118] skipping f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82 - not in ps
	I0814 09:57:53.240325  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223
	I0814 09:57:53.257867  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223 82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557
	I0814 09:57:53.271380  293459 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223 82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:57:53Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0814 09:57:53.812096  293459 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:53.822458  293459 pause.go:50] kubelet running: false
	I0814 09:57:53.822508  293459 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0814 09:57:53.938478  293459 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0814 09:57:53.938564  293459 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0814 09:57:54.016510  293459 cri.go:76] found id: "82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557"
	I0814 09:57:54.016533  293459 cri.go:76] found id: "0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918"
	I0814 09:57:54.016541  293459 cri.go:76] found id: "19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223"
	I0814 09:57:54.016547  293459 cri.go:76] found id: "f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d"
	I0814 09:57:54.016552  293459 cri.go:76] found id: "a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3"
	I0814 09:57:54.016559  293459 cri.go:76] found id: "ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371"
	I0814 09:57:54.016570  293459 cri.go:76] found id: "e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06"
	I0814 09:57:54.016576  293459 cri.go:76] found id: "d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde"
	I0814 09:57:54.016581  293459 cri.go:76] found id: "6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	I0814 09:57:54.016591  293459 cri.go:76] found id: "9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526"
	I0814 09:57:54.016599  293459 cri.go:76] found id: ""
	I0814 09:57:54.016644  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0814 09:57:54.079286  293459 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918","pid":5727,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918/rootfs","created":"2021-08-14T09:57:29.001157326Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223","pid":5420,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223","rootfs":"/run/containerd/io.containerd.runtime.v2.t
ask/k8s.io/19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223/rootfs","created":"2021-08-14T09:57:27.93721129Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","pid":5298,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e/rootfs","created":"2021-08-14T09:57:27.429031723Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9
zklk_6f6c319c-8cf6-45c5-bba1-3a5999ff9a0e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","pid":4569,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0/rootfs","created":"2021-08-14T09:57:06.26100744Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210814095040-6746_50e2dfa2b1eaaff4e1c16b1d209880b0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","pid":6048,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.i
o/54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801/rootfs","created":"2021-08-14T09:57:31.044969338Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-54btm_dfc36e40-fb64-41d8-a005-ab76555690d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","pid":5290,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898/rootfs","created":"2021-08-14T09:57:27.29701866Z","annotat
ions":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-klrbg_18dba609-fb6b-4895-aca7-2d94942571f6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","pid":4567,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804/rootfs","created":"2021-08-14T09:57:06.261015047Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210814095040-6746
_4ff167100744a55cb66874f5f0f5a8f3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557","pid":5947,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557/rootfs","created":"2021-08-14T09:57:30.817022661Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","pid":4588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414/rootfs","created":"2021-08-14T09:57:06.261062989Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210814095040-6746_6140894add4347409ea837150aff8296"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526","pid":6178,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526/rootfs","created":"2021-08-14T09:57:31.800918203Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"conta
iner","io.kubernetes.cri.sandbox-id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","pid":5664,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836/rootfs","created":"2021-08-14T09:57:28.527905284Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-zjjkn_50cc162c-7c79-4bcb-a514-12fbea928898"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3","pid":4706,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3/rootfs","created":"2021-08-14T09:57:06.548899793Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371","pid":4716,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371/rootfs","created":"2021-08-14T09:57:06.568991611Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.contain
er-type":"container","io.kubernetes.cri.sandbox-id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde","pid":4715,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde/rootfs","created":"2021-08-14T09:57:06.568993452Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4","pid":6117,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6fcb757f64e9d2cce8be66
b8e6a95c66504da6dede6dd14334328a7f2c577b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4/rootfs","created":"2021-08-14T09:57:31.333152576Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-hjr7d_2936fbc6-dc7a-429f-b4ae-fa739e5e2c42"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06","pid":4699,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06/rootfs","created":"2021-08-14T09:57:06.568985137Z","annotations":{"io.kubernetes.cri.cont
ainer-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","pid":4568,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df/rootfs","created":"2021-08-14T09:57:06.260986123Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210814095040-6746_929349d6f8b1131233aab1522615c193"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef0195e898ea1e50d58ed3fed8747f7ba27
336b3c8dd4a32d0c0e074dca38229","pid":5915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229/rootfs","created":"2021-08-14T09:57:30.568975443Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c822637b-9c4e-48fa-ba25-77aeb1c4f4ad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d","pid":5330,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0f81c09afef0fe8ea79cd9491a2b0474
84b1e6915ba7513902429bb9142f00d/rootfs","created":"2021-08-14T09:57:27.453031347Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","pid":5874,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82/rootfs","created":"2021-08-14T09:57:30.332983101Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-2ms26_0f0c284a-68cf-4545-83
5a-464713e03dfc"},"owner":"root"}]
	I0814 09:57:54.079569  293459 cri.go:113] list returned 20 containers
	I0814 09:57:54.079587  293459 cri.go:116] container: {ID:0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918 Status:paused}
	I0814 09:57:54.079604  293459 cri.go:122] skipping {0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918 paused}: state = "paused", want "running"
	I0814 09:57:54.079616  293459 cri.go:116] container: {ID:19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223 Status:paused}
	I0814 09:57:54.079623  293459 cri.go:122] skipping {19c2862b1cff841c844cb5fa281ed1ab370460da8395a9de5c91d298ff7fb223 paused}: state = "paused", want "running"
	I0814 09:57:54.079630  293459 cri.go:116] container: {ID:31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e Status:running}
	I0814 09:57:54.079640  293459 cri.go:118] skipping 31cc2c576d9a63f26c80fc650a8ea87afe9c3994ffdc8778685057954bfd0d1e - not in ps
	I0814 09:57:54.079649  293459 cri.go:116] container: {ID:38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0 Status:running}
	I0814 09:57:54.079656  293459 cri.go:118] skipping 38fa574413e2e213b874019b94da64da94b7c63ed4002b6dda4af4cbb45958c0 - not in ps
	I0814 09:57:54.079664  293459 cri.go:116] container: {ID:54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801 Status:running}
	I0814 09:57:54.079671  293459 cri.go:118] skipping 54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801 - not in ps
	I0814 09:57:54.079685  293459 cri.go:116] container: {ID:586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898 Status:running}
	I0814 09:57:54.079699  293459 cri.go:118] skipping 586824e601bb5366b2540c142b464cef675f1937ecffaa0d3fcf37148f19f898 - not in ps
	I0814 09:57:54.079704  293459 cri.go:116] container: {ID:81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804 Status:running}
	I0814 09:57:54.079711  293459 cri.go:118] skipping 81c3e2185d7c16c0669ee1f042ca298462e789bc5c6f9877513bf14379753804 - not in ps
	I0814 09:57:54.079719  293459 cri.go:116] container: {ID:82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557 Status:running}
	I0814 09:57:54.079725  293459 cri.go:116] container: {ID:844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414 Status:running}
	I0814 09:57:54.079737  293459 cri.go:118] skipping 844a229d51e27b8d3a25df324f6c9c1a75d47928a739f0e0adcca1103c6c2414 - not in ps
	I0814 09:57:54.079743  293459 cri.go:116] container: {ID:9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526 Status:running}
	I0814 09:57:54.079749  293459 cri.go:116] container: {ID:a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836 Status:running}
	I0814 09:57:54.079761  293459 cri.go:118] skipping a0ebd929835ad8fe3ca7807a0b61540d8a39201ee0d35ce20f82b05d659d7836 - not in ps
	I0814 09:57:54.079773  293459 cri.go:116] container: {ID:a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3 Status:running}
	I0814 09:57:54.079783  293459 cri.go:116] container: {ID:ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371 Status:running}
	I0814 09:57:54.079790  293459 cri.go:116] container: {ID:d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde Status:running}
	I0814 09:57:54.079796  293459 cri.go:116] container: {ID:d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4 Status:running}
	I0814 09:57:54.079803  293459 cri.go:118] skipping d6fcb757f64e9d2cce8be66b8e6a95c66504da6dede6dd14334328a7f2c577b4 - not in ps
	I0814 09:57:54.079811  293459 cri.go:116] container: {ID:e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06 Status:running}
	I0814 09:57:54.079817  293459 cri.go:116] container: {ID:e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df Status:running}
	I0814 09:57:54.079825  293459 cri.go:118] skipping e9c15c0af872affdcecec446480d2cd7de049082e799b6cc68896de9203c38df - not in ps
	I0814 09:57:54.079831  293459 cri.go:116] container: {ID:ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229 Status:running}
	I0814 09:57:54.079843  293459 cri.go:118] skipping ef0195e898ea1e50d58ed3fed8747f7ba27336b3c8dd4a32d0c0e074dca38229 - not in ps
	I0814 09:57:54.079851  293459 cri.go:116] container: {ID:f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d Status:running}
	I0814 09:57:54.079860  293459 cri.go:116] container: {ID:f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82 Status:running}
	I0814 09:57:54.079867  293459 cri.go:118] skipping f161e65b2772b9b492d328ad2a945ab58de5266f5be50044f83675254136ca82 - not in ps
	I0814 09:57:54.079914  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557
	I0814 09:57:54.098019  293459 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557 9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526
	I0814 09:57:54.113111  293459 out.go:177] 
	W0814 09:57:54.113273  293459 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557 9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:57:54Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0814 09:57:54.113292  293459 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0814 09:57:54.116341  293459 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0814 09:57:54.117715  293459 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210814095040-6746 --alsologtostderr -v=1 failed: exit status 80
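The root cause is visible in the ssh_runner lines of the captured stderr above: after pausing the first container, minikube issued a single "sudo runc ... pause" with two container IDs, but runc's pause subcommand accepts exactly one container ID per invocation (hence the "requires exactly 1 argument(s)" error). A minimal Go sketch of the obvious workaround, one pause invocation per ID, follows; pauseContainers is a hypothetical helper written for illustration, not minikube's actual fix.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseContainers runs "runc pause" once per container ID, since the
	// pause subcommand requires exactly 1 argument (see the usage error
	// in the captured stderr above). Hypothetical helper, not minikube code.
	func pauseContainers(root string, ids []string) error {
		for _, id := range ids {
			cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		// The two IDs minikube tried to pause in a single invocation above.
		ids := []string{
			"82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557",
			"9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526",
		}
		if err := pauseContainers("/run/containerd/runc/k8s.io", ids); err != nil {
			fmt.Println(err)
		}
	}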
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210814095040-6746
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210814095040-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551",
	        "Created": "2021-08-14T09:50:41.680931933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250731,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:52:08.402850153Z",
	            "FinishedAt": "2021-08-14T09:52:06.144304231Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/hostname",
	        "HostsPath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/hosts",
	        "LogPath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551-json.log",
	        "Name": "/default-k8s-different-port-20210814095040-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210814095040-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210814095040-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532/merged",
	                "UpperDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532/diff",
	                "WorkDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210814095040-6746",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210814095040-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210814095040-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210814095040-6746",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210814095040-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6885984f866462f51a690bbe5383d686ae40fd5953cb59fd00db1d47ffb40fbc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6885984f8664",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210814095040-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "981ed925d673"
	                    ],
	                    "NetworkID": "fcd9d5f352a71d72e683e953cda11a59709ddebb4388de429a5d199326a6eb94",
	                    "EndpointID": "a8a45f1062529330dec738132ba63556681fdd80b34dba7ff203c56603c04ef3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
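For reference, the host-mapped ports recorded in the inspect output above (for example the 22/tcp binding on 127.0.0.1:32958) can be recovered with docker inspect's standard Go-template support. A minimal sketch using only the stock docker CLI and the profile name from this test, not any minikube-internal API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into the NetworkSettings.Ports map shown in the inspect
		// output above and print the first 22/tcp host binding ("32958").
		name := "default-k8s-different-port-20210814095040-6746"
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "--format", format, name).Output()
		if err != nil {
			fmt.Println("inspect failed:", err) // e.g. container not running
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}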
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746: exit status 2 (367.206987ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210814095040-6746 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20210814095040-6746 logs -n 25: (1.115599126s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:34 UTC | Sat, 14 Aug 2021 09:50:38 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:38 UTC | Sat, 14 Aug 2021 09:50:39 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20210814095039-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:39 UTC | Sat, 14 Aug 2021 09:50:40 UTC |
	|         | disable-driver-mounts-20210814095039-6746                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:50:56 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                            | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:06 UTC | Sat, 14 Aug 2021 09:51:07 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:08 UTC | Sat, 14 Aug 2021 09:51:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:40 UTC | Sat, 14 Aug 2021 09:51:36 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:51:45 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:52:06 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:04 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:54:08 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:08 UTC | Sat, 14 Aug 2021 09:54:09 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:09 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:04 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:29 UTC | Sat, 14 Aug 2021 09:55:32 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:32 UTC | Sat, 14 Aug 2021 09:55:33 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	| start   | -p auto-20210814093634-6746                                | auto-20210814093634-6746                       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:33 UTC | Sat, 14 Aug 2021 09:56:43 UTC |
	|         | --memory=2048                                              |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	| ssh     | -p auto-20210814093634-6746                                | auto-20210814093634-6746                       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:56:43 UTC | Sat, 14 Aug 2021 09:56:43 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |         |                               |                               |
	| delete  | -p auto-20210814093634-6746                                | auto-20210814093634-6746                       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:56:53 UTC | Sat, 14 Aug 2021 09:56:56 UTC |
	| start   | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:52:06 UTC | Sat, 14 Aug 2021 09:57:41 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:57:51 UTC | Sat, 14 Aug 2021 09:57:52 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:56:56
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:56:56.952852  282733 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:56:56.952933  282733 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:56.952943  282733 out.go:311] Setting ErrFile to fd 2...
	I0814 09:56:56.952948  282733 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:56.953064  282733 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:56:56.953314  282733 out.go:305] Setting JSON to false
	I0814 09:56:56.991428  282733 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5979,"bootTime":1628929038,"procs":249,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:56:56.991543  282733 start.go:121] virtualization: kvm guest
	I0814 09:56:56.994228  282733 out.go:177] * [custom-weave-20210814093636-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:56:56.995753  282733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:56:56.994374  282733 notify.go:169] Checking for updates...
	I0814 09:56:56.997263  282733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:56:56.998575  282733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:56:56.999879  282733 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:56:57.000361  282733 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:56:57.000448  282733 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:56:57.000519  282733 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:56:57.000554  282733 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:56:57.051337  282733 docker.go:132] docker version: linux-19.03.15
	I0814 09:56:57.051432  282733 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:56:57.135338  282733 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:56:57.089135632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:56:57.135446  282733 docker.go:244] overlay module found
	I0814 09:56:57.137780  282733 out.go:177] * Using the docker driver based on user configuration
	I0814 09:56:57.137807  282733 start.go:278] selected driver: docker
	I0814 09:56:57.137813  282733 start.go:751] validating driver "docker" against <nil>
	I0814 09:56:57.137834  282733 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:56:57.137882  282733 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:56:57.137902  282733 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:56:57.139342  282733 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:56:57.140124  282733 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:56:57.221839  282733 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:56:57.178208852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:56:57.221970  282733 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:56:57.222147  282733 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:56:57.222174  282733 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0814 09:56:57.222192  282733 start_flags.go:272] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0814 09:56:57.222204  282733 start_flags.go:277] config:
	{Name:custom-weave-20210814093636-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:56:57.225204  282733 out.go:177] * Starting control plane node custom-weave-20210814093636-6746 in cluster custom-weave-20210814093636-6746
	I0814 09:56:57.225246  282733 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:56:57.226685  282733 out.go:177] * Pulling base image ...
	I0814 09:56:57.226724  282733 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:56:57.226763  282733 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:56:57.226779  282733 cache.go:56] Caching tarball of preloaded images
	I0814 09:56:57.226817  282733 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:56:57.226937  282733 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:56:57.226957  282733 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:56:57.227074  282733 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/config.json ...
	I0814 09:56:57.227109  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/config.json: {Name:mkba4376994c19173a224130a1ac43ffb40a0b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
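
The lock.go line above shows minikube guarding the profile's config.json write behind a named file lock with a 500ms retry delay and a 1m0s timeout. A minimal Go sketch of that acquire-with-retry pattern, assuming a simple O_EXCL lock file rather than minikube's actual lock implementation:

// Hypothetical sketch of the acquire-with-retry pattern suggested by the
// lock.go line above: a lock file is created with O_EXCL, and contenders
// poll every `delay` until `timeout`. Not minikube's actual implementation.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/demo.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to write config.json")
}
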
	I0814 09:56:57.321736  282733 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:56:57.321772  282733 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:56:57.321792  282733 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:56:57.321845  282733 start.go:313] acquiring machines lock for custom-weave-20210814093636-6746: {Name:mk8a34e7e0bd18f9f8d5595fe521fee684812b37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:56:57.321979  282733 start.go:317] acquired machines lock for "custom-weave-20210814093636-6746" in 107.945µs
	I0814 09:56:57.322011  282733 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210814093636-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:56:57.322109  282733 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:56:58.189011  250455 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.435152346s)
	I0814 09:56:58.189096  250455 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0814 09:56:58.198874  250455 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:56:58.198933  250455 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:56:58.221441  250455 cri.go:76] found id: ""
	I0814 09:56:58.221508  250455 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:56:58.248974  250455 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:56:58.249022  250455 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:56:58.276981  250455 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:56:58.277025  250455 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:56:58.565330  250455 out.go:204]   - Generating certificates and keys ...
	I0814 09:56:59.462535  250455 out.go:204]   - Booting up control plane ...
	I0814 09:56:57.324903  282733 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0814 09:56:57.325198  282733 start.go:160] libmachine.API.Create for "custom-weave-20210814093636-6746" (driver="docker")
	I0814 09:56:57.325242  282733 client.go:168] LocalClient.Create starting
	I0814 09:56:57.325324  282733 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:56:57.325365  282733 main.go:130] libmachine: Decoding PEM data...
	I0814 09:56:57.325387  282733 main.go:130] libmachine: Parsing certificate...
	I0814 09:56:57.325533  282733 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:56:57.325558  282733 main.go:130] libmachine: Decoding PEM data...
	I0814 09:56:57.325579  282733 main.go:130] libmachine: Parsing certificate...
	I0814 09:56:57.325972  282733 cli_runner.go:115] Run: docker network inspect custom-weave-20210814093636-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:56:57.365149  282733 cli_runner.go:162] docker network inspect custom-weave-20210814093636-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:56:57.365227  282733 network_create.go:255] running [docker network inspect custom-weave-20210814093636-6746] to gather additional debugging logs...
	I0814 09:56:57.365253  282733 cli_runner.go:115] Run: docker network inspect custom-weave-20210814093636-6746
	W0814 09:56:57.403769  282733 cli_runner.go:162] docker network inspect custom-weave-20210814093636-6746 returned with exit code 1
	I0814 09:56:57.403798  282733 network_create.go:258] error running [docker network inspect custom-weave-20210814093636-6746]: docker network inspect custom-weave-20210814093636-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20210814093636-6746
	I0814 09:56:57.403826  282733 network_create.go:260] output of [docker network inspect custom-weave-20210814093636-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20210814093636-6746
	
	** /stderr **
	I0814 09:56:57.403873  282733 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:56:57.443695  282733 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcd9d5f352a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fd:56:42:d1}}
	I0814 09:56:57.444831  282733 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00061e030] misses:0}
	I0814 09:56:57.444886  282733 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
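
network.go above skips 192.168.49.0/24 because an existing bridge (br-fcd9d5f352a7) already occupies it, then reserves the next free /24. A hedged sketch of that scan, checking candidate subnets against the host's interface addresses; the candidate list below is illustrative only, not minikube's real step sequence:

// Hedged sketch (not minikube's code) of the "skip taken subnet, take the
// next free one" behaviour visible above: walk candidate 192.168.x.0/24
// blocks and return the first one no local interface address falls into.
package main

import (
	"fmt"
	"net"
)

func subnetTaken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Candidate third octets are illustrative only.
	for _, octet := range []int{49, 58, 67, 76} {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if !taken {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet that is taken:", cidr)
	}
}
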
	I0814 09:56:57.444900  282733 network_create.go:106] attempt to create docker network custom-weave-20210814093636-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:56:57.444949  282733 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210814093636-6746
	I0814 09:56:57.517061  282733 network_create.go:90] docker network custom-weave-20210814093636-6746 192.168.58.0/24 created
	I0814 09:56:57.517091  282733 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20210814093636-6746" container
	I0814 09:56:57.517209  282733 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:56:57.561425  282733 cli_runner.go:115] Run: docker volume create custom-weave-20210814093636-6746 --label name.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:56:57.604821  282733 oci.go:102] Successfully created a docker volume custom-weave-20210814093636-6746
	I0814 09:56:57.604920  282733 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210814093636-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --entrypoint /usr/bin/test -v custom-weave-20210814093636-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:56:58.406935  282733 oci.go:106] Successfully prepared a docker volume custom-weave-20210814093636-6746
	W0814 09:56:58.406995  282733 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:56:58.407005  282733 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:56:58.407006  282733 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:56:58.407040  282733 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:56:58.407064  282733 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:56:58.407091  282733 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210814093636-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:56:58.495279  282733 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210814093636-6746 --name custom-weave-20210814093636-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --network custom-weave-20210814093636-6746 --ip 192.168.58.2 --volume custom-weave-20210814093636-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:56:59.044706  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Running}}
	I0814 09:56:59.097474  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:56:59.159235  282733 cli_runner.go:115] Run: docker exec custom-weave-20210814093636-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:56:59.310241  282733 oci.go:278] the created container "custom-weave-20210814093636-6746" has a running status.
	I0814 09:56:59.310278  282733 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa...
	I0814 09:56:59.551124  282733 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:56:59.987572  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:00.030969  282733 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:57:00.030990  282733 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210814093636-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:57:02.834646  282733 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210814093636-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427486341s)
	I0814 09:57:02.834685  282733 kic.go:188] duration metric: took 4.427644 seconds to extract preloaded images to volume
	I0814 09:57:02.834765  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:02.873812  282733 machine.go:88] provisioning docker machine ...
	I0814 09:57:02.873847  282733 ubuntu.go:169] provisioning hostname "custom-weave-20210814093636-6746"
	I0814 09:57:02.873913  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:02.910725  282733 main.go:130] libmachine: Using SSH client type: native
	I0814 09:57:02.910963  282733 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I0814 09:57:02.910988  282733 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20210814093636-6746 && echo "custom-weave-20210814093636-6746" | sudo tee /etc/hostname
	I0814 09:57:03.093734  282733 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20210814093636-6746
	
	I0814 09:57:03.093800  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.134690  282733 main.go:130] libmachine: Using SSH client type: native
	I0814 09:57:03.134852  282733 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I0814 09:57:03.134878  282733 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20210814093636-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210814093636-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20210814093636-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:57:03.260393  282733 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:57:03.260428  282733 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:57:03.260450  282733 ubuntu.go:177] setting up certificates
	I0814 09:57:03.260459  282733 provision.go:83] configureAuth start
	I0814 09:57:03.260507  282733 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210814093636-6746
	I0814 09:57:03.299898  282733 provision.go:138] copyHostCerts
	I0814 09:57:03.299951  282733 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:57:03.299958  282733 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:57:03.300017  282733 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:57:03.300102  282733 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:57:03.300114  282733 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:57:03.300141  282733 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:57:03.300211  282733 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:57:03.300220  282733 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:57:03.300241  282733 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:57:03.300282  282733 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20210814093636-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210814093636-6746]
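
provision.go above generates a server certificate whose SANs cover the node IP, loopback, and the minikube hostnames. A self-contained sketch of issuing such a certificate with Go's crypto/x509; it is self-signed for brevity, whereas minikube signs with its machine CA key:

// Illustrative sketch of issuing a server certificate with the SAN list the
// provision step logs above (IPs plus hostnames). Self-signed here for
// brevity; minikube signs with its machine CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-weave-20210814093636-6746"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "custom-weave-20210814093636-6746"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
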
	I0814 09:57:03.445860  282733 provision.go:172] copyRemoteCerts
	I0814 09:57:03.445907  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:57:03.445946  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.485547  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:03.575579  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:57:03.592246  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0814 09:57:03.607659  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:57:03.623504  282733 provision.go:86] duration metric: configureAuth took 363.034675ms
	I0814 09:57:03.623522  282733 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:57:03.623672  282733 config.go:177] Loaded profile config "custom-weave-20210814093636-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:57:03.623683  282733 machine.go:91] provisioned docker machine in 749.85194ms
	I0814 09:57:03.623690  282733 client.go:171] LocalClient.Create took 6.298439787s
	I0814 09:57:03.623709  282733 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210814093636-6746" took 6.298512324s
	I0814 09:57:03.623721  282733 start.go:267] post-start starting for "custom-weave-20210814093636-6746" (driver="docker")
	I0814 09:57:03.623731  282733 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:57:03.623783  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:57:03.623829  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.662518  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:03.751889  282733 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:57:03.754499  282733 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:57:03.754520  282733 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:57:03.754531  282733 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:57:03.754536  282733 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:57:03.754544  282733 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:57:03.754583  282733 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:57:03.754678  282733 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:57:03.754818  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:57:03.761547  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:57:03.778521  282733 start.go:270] post-start completed in 154.78447ms
	I0814 09:57:03.778884  282733 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210814093636-6746
	I0814 09:57:03.817300  282733 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/config.json ...
	I0814 09:57:03.817501  282733 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:57:03.817539  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.856432  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:03.940575  282733 start.go:129] duration metric: createHost completed in 6.618453785s
	I0814 09:57:03.940602  282733 start.go:80] releasing machines lock for "custom-weave-20210814093636-6746", held for 6.618608448s
	I0814 09:57:03.940675  282733 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210814093636-6746
	I0814 09:57:03.978456  282733 ssh_runner.go:149] Run: systemctl --version
	I0814 09:57:03.978510  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.978523  282733 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:57:03.978613  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:04.019693  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:04.020044  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:04.108287  282733 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:57:04.117763  282733 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:57:04.134850  282733 docker.go:153] disabling docker service ...
	I0814 09:57:04.134892  282733 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:57:04.150707  282733 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:57:04.158955  282733 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:57:04.220536  282733 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:57:04.279074  282733 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:57:04.287628  282733 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:57:04.299359  282733 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
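
The long argument piped to `base64 -d` above is simply containerd's /etc/containerd/config.toml, base64-encoded so it survives quoting inside the remote shell command. Decoding it locally makes the TOML readable; a minimal Go sketch, using only the first line of the real payload as a stand-in:

// Minimal sketch: the long string above is plain base64 of
// /etc/containerd/config.toml, so decoding it reveals the TOML that was
// written. The short constant below stands in for the full blob.
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	const blob = "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo=" // first line of the real payload, for illustration
	out, err := base64.StdEncoding.DecodeString(blob)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints: root = "/var/lib/containerd"
}
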
	I0814 09:57:04.312669  282733 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:57:04.319068  282733 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:57:04.319116  282733 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:57:04.325942  282733 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:57:04.332327  282733 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:57:04.386037  282733 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:57:04.450836  282733 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:57:04.450903  282733 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:57:04.454199  282733 start.go:413] Will wait 60s for crictl version
	I0814 09:57:04.454251  282733 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:57:04.477022  282733 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:57:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:57:13.511644  250455 out.go:204]   - Configuring RBAC rules ...
	I0814 09:57:13.923248  250455 cni.go:93] Creating CNI manager for ""
	I0814 09:57:13.923271  250455 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:57:15.526541  282733 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:57:15.614160  282733 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:57:15.614216  282733 ssh_runner.go:149] Run: containerd --version
	I0814 09:57:15.636518  282733 ssh_runner.go:149] Run: containerd --version
	I0814 09:57:13.924971  250455 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:57:13.925044  250455 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:57:13.928570  250455 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:57:13.928591  250455 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:57:13.941040  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:57:14.170210  250455 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:57:14.170256  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:14.170293  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=default-k8s-different-port-20210814095040-6746 minikube.k8s.io/updated_at=2021_08_14T09_57_14_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:14.186307  250455 ops.go:34] apiserver oom_adj: -16
	I0814 09:57:14.302328  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:14.880772  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:15.381100  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:15.880902  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:16.380680  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:15.659220  282733 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:57:15.659305  282733 cli_runner.go:115] Run: docker network inspect custom-weave-20210814093636-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:57:15.700463  282733 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:57:15.703756  282733 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:57:15.712649  282733 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:57:15.712716  282733 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:57:15.735226  282733 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:57:15.735242  282733 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:57:15.735277  282733 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:57:15.755650  282733 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:57:15.755667  282733 cache_images.go:74] Images are preloaded, skipping loading
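
The preload check above runs `sudo crictl images --output json` and concludes every required image is already present. A sketch of that comparison; the JSON field names (`images`, `repoTags`) are assumptions about crictl's output shape, and the expected-image list is illustrative:

// Hedged sketch of the preload check implied above: list images via crictl
// and compare repo tags against an expected set. Field names are assumed.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{"k8s.gcr.io/pause:3.4.1"} { // illustrative
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}
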
	I0814 09:57:15.755708  282733 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:57:15.776315  282733 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0814 09:57:15.776343  282733 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:57:15.776355  282733 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210814093636-6746 NodeName:custom-weave-20210814093636-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:57:15.776470  282733 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "custom-weave-20210814093636-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:57:15.776545  282733 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20210814093636-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0814 09:57:15.776582  282733 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:57:15.782969  282733 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:57:15.783027  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:57:15.789022  282733 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (546 bytes)
	I0814 09:57:15.800510  282733 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:57:15.811775  282733 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2082 bytes)
	I0814 09:57:15.823033  282733 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:57:15.825569  282733 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
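
The bash one-liner above updates /etc/hosts idempotently: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the result back via sudo. The same upsert expressed as a Go sketch, with minimal error handling (run with sufficient privileges):

// Sketch of the idempotent /etc/hosts update performed by the bash
// one-liner above: drop any stale "<name>" line, then append the current
// mapping. Paths and names are taken from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
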
	I0814 09:57:15.834080  282733 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746 for IP: 192.168.58.2
	I0814 09:57:15.834123  282733 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:57:15.834140  282733 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:57:15.834183  282733 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.key
	I0814 09:57:15.834194  282733 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt with IP's: []
	I0814 09:57:16.025555  282733 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt ...
	I0814 09:57:16.025580  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt: {Name:mk3a7b86b266f1b42b6ee6625378c03e373788ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.025769  282733 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.key ...
	I0814 09:57:16.025786  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.key: {Name:mk6dc3f7832c9b7557e57f6e827aea7c6af8ba28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.025890  282733 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041
	I0814 09:57:16.025901  282733 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:57:16.192301  282733 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041 ...
	I0814 09:57:16.192332  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041: {Name:mkbc64c1851ab5694e391aa9b9fee02fcc097261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.192506  282733 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041 ...
	I0814 09:57:16.192522  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041: {Name:mke63e08f1a1ec5808cd3f6054f1a872043945b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.192617  282733 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt
	I0814 09:57:16.192708  282733 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key
	I0814 09:57:16.192790  282733 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key
	I0814 09:57:16.192819  282733 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt with IP's: []
	I0814 09:57:16.466484  282733 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt ...
	I0814 09:57:16.466509  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt: {Name:mk15d3e1eab26da19f8f832875268908ff89a17f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.466662  282733 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key ...
	I0814 09:57:16.466676  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key: {Name:mk79da5996dabdc5d072ddc40c197548e809f24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.466845  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:57:16.466916  282733 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:57:16.466928  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:57:16.466953  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:57:16.466976  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:57:16.466999  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:57:16.467044  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:57:16.467987  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:57:16.484684  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:57:16.500174  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:57:16.515342  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:57:16.530218  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:57:16.545318  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:57:16.560434  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:57:16.575191  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:57:16.590133  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:57:16.604812  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:57:16.619537  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:57:16.634617  282733 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:57:16.645623  282733 ssh_runner.go:149] Run: openssl version
	I0814 09:57:16.650009  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:57:16.656351  282733 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:57:16.659091  282733 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:57:16.659131  282733 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:57:16.663354  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:57:16.669677  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:57:16.675936  282733 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:57:16.678661  282733 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:57:16.678700  282733 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:57:16.682900  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:57:16.689252  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:57:16.695761  282733 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:57:16.698452  282733 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:57:16.698494  282733 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:57:16.702666  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
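The three test/ln/openssl sequences above implement OpenSSL's hashed-certificate lookup: each extra CA is staged under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and a symlink with that hash as its filename (b5213941.0, 51391683.0, 3ec20f2e.0) is created in /etc/ssl/certs so TLS clients on the node can find the CA. A minimal Go sketch of the pattern, assuming local (not SSH) execution; `linkCA` is a hypothetical helper, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCA symlinks a staged CA PEM into /etc/ssl/certs under the
    // subject-hash name OpenSSL expects (e.g. b5213941.0).
    func linkCA(pem string) error {
    	// `openssl x509 -hash -noout -in <pem>` prints the subject hash.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace a stale link, mirroring `ln -fs`
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }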
	I0814 09:57:16.708960  282733 kubeadm.go:390] StartCluster: {Name:custom-weave-20210814093636-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:57:16.709045  282733 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:57:16.709088  282733 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:57:16.732120  282733 cri.go:76] found id: ""
	I0814 09:57:16.732197  282733 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:57:16.738334  282733 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:57:16.744477  282733 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:57:16.744512  282733 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:57:16.750517  282733 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:57:16.750553  282733 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
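The preceding Run line launches `kubeadm init` with the version-pinned binaries directory prepended to PATH and a long --ignore-preflight-errors list (Swap, Mem, SystemVerification, and the DirAvailable/FileAvailable checks) that would otherwise fail inside the docker driver's container. A sketch, assumed rather than taken from minikube's source, of how such a command string might be composed:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeadmInitCmd builds the bootstrap command seen in the log: PATH is
    // prefixed with the per-version binaries dir and the named preflight
    // checks are skipped.
    func kubeadmInitCmd(version, config string, ignored []string) string {
    	return fmt.Sprintf(
    		"sudo env PATH=/var/lib/minikube/binaries/%s:$PATH kubeadm init --config %s --ignore-preflight-errors=%s",
    		version, config, strings.Join(ignored, ","))
    }

    func main() {
    	fmt.Println(kubeadmInitCmd("v1.21.3", "/var/tmp/minikube/kubeadm.yaml",
    		[]string{"SystemVerification", "Swap", "Mem"}))
    }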
	I0814 09:57:16.880565  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:17.380815  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:17.881015  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:18.380420  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:18.881004  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:19.380678  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:19.880259  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:20.381147  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:20.881055  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:21.380516  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:21.880505  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:22.381110  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:22.880665  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:23.380542  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:23.880870  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:24.380779  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:24.880701  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:25.380927  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:25.881137  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:26.381040  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:26.881030  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:26.979312  250455 kubeadm.go:985] duration metric: took 12.809102419s to wait for elevateKubeSystemPrivileges.
	I0814 09:57:26.979339  250455 kubeadm.go:392] StartCluster complete in 5m2.547838589s
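The run of `kubectl get sa default` lines above, spaced roughly 500ms apart, is the elevateKubeSystemPrivileges wait: after kubeadm finishes, the "default" service account must exist before the RBAC binding can be created. A minimal sketch of that polling loop, assuming local execution (minikube runs it over SSH); `waitForDefaultSA` is an illustrative name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or
    // the timeout elapses, matching the ~0.5s cadence visible in the log.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // service account exists; RBAC bootstrap can proceed
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.21.3/kubectl",
    		"/var/lib/minikube/kubeconfig", 3*time.Minute)
    	fmt.Println(err)
    }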
	I0814 09:57:26.979355  250455 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:26.979427  250455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:57:26.980036  250455 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:27.496347  250455 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210814095040-6746" rescaled to 1
	I0814 09:57:27.496422  250455 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:57:27.496452  250455 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:57:27.498190  250455 out.go:177] * Verifying Kubernetes components...
	I0814 09:57:27.498249  250455 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:27.496632  250455 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0814 09:57:27.496855  250455 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:57:27.498326  250455 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498336  250455 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498345  250455 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498355  250455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498363  250455 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.498375  250455 addons.go:147] addon metrics-server should already be in state true
	I0814 09:57:27.498411  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.498348  250455 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.498683  250455 addons.go:147] addon dashboard should already be in state true
	I0814 09:57:27.498703  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.498716  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.498948  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.498328  250455 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.499129  250455 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.499140  250455 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:57:27.499168  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.499238  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.499649  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.591135  250455 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.591165  250455 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:57:27.591195  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.591734  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.594700  250455 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0814 09:57:27.596282  250455 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0814 09:57:27.596336  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 09:57:27.596350  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0814 09:57:27.597812  250455 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0814 09:57:27.597865  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0814 09:57:27.597884  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0814 09:57:27.596396  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.597929  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.599764  250455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:57:27.599884  250455 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:57:27.599902  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:57:27.599954  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.665084  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.669545  250455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210814095040-6746" to be "Ready" ...
	I0814 09:57:27.673504  250455 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 09:57:27.675654  250455 node_ready.go:49] node "default-k8s-different-port-20210814095040-6746" has status "Ready":"True"
	I0814 09:57:27.675673  250455 node_ready.go:38] duration metric: took 6.100787ms waiting for node "default-k8s-different-port-20210814095040-6746" to be "Ready" ...
	I0814 09:57:27.675684  250455 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:57:27.678615  250455 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:57:27.678634  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:57:27.678689  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.682437  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.682707  250455 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-psntz" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:27.727536  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.743432  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.848313  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 09:57:27.848337  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0814 09:57:27.854391  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0814 09:57:27.854411  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0814 09:57:27.923471  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0814 09:57:27.923548  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0814 09:57:27.925638  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 09:57:27.925683  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0814 09:57:27.941227  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:57:28.005583  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:57:28.013191  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0814 09:57:28.013221  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0814 09:57:28.036868  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:57:28.036897  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0814 09:57:28.134690  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0814 09:57:28.134715  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0814 09:57:28.138410  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:57:28.221543  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0814 09:57:28.221566  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0814 09:57:28.336057  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0814 09:57:28.336082  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0814 09:57:28.402988  250455 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
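The host-record injection logged here is the sed pipeline from 09:57:27.673: it splices a hosts block into the CoreDNS Corefile just before its `forward . /etc/resolv.conf` directive and replaces the configmap. Reconstructed from that command (the resulting Corefile is not captured in this log), the inserted stanza looks like:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

This lets pods resolve host.minikube.internal to the Docker gateway while all other names fall through to the regular forwarder.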
	I0814 09:57:28.438766  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0814 09:57:28.438793  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0814 09:57:28.518405  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0814 09:57:28.518431  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0814 09:57:28.609860  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:57:28.609885  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0814 09:57:28.628049  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:57:29.524651  250455 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.519031606s)
	I0814 09:57:29.602043  250455 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.463594755s)
	I0814 09:57:29.602084  250455 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:29.713541  250455 pod_ready.go:102] pod "coredns-558bd4d5db-psntz" in "kube-system" namespace has status "Ready":"False"
	I0814 09:57:30.127247  250455 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.499142253s)
	I0814 09:57:30.129123  250455 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0814 09:57:30.129152  250455 addons.go:344] enableAddons completed in 2.632529919s
	I0814 09:57:31.211525  250455 pod_ready.go:97] error getting pod "coredns-558bd4d5db-psntz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-psntz" not found
	I0814 09:57:31.211556  250455 pod_ready.go:81] duration metric: took 3.528824609s waiting for pod "coredns-558bd4d5db-psntz" in "kube-system" namespace to be "Ready" ...
	E0814 09:57:31.211569  250455 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-psntz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-psntz" not found
	I0814 09:57:31.211578  250455 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:34.023009  250455 pod_ready.go:102] pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace has status "Ready":"False"
	I0814 09:57:41.180448  282733 out.go:204]   - Generating certificates and keys ...
	I0814 09:57:41.183353  282733 out.go:204]   - Booting up control plane ...
	I0814 09:57:41.185840  282733 out.go:204]   - Configuring RBAC rules ...
	I0814 09:57:41.187901  282733 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0814 09:57:39.718572  250455 pod_ready.go:102] pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace has status "Ready":"False"
	I0814 09:57:40.222423  250455 pod_ready.go:92] pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.222453  250455 pod_ready.go:81] duration metric: took 9.010866627s waiting for pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.222466  250455 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.226690  250455 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.226711  250455 pod_ready.go:81] duration metric: took 4.234825ms waiting for pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.226723  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.230822  250455 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.230839  250455 pod_ready.go:81] duration metric: took 4.107139ms waiting for pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.230851  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.235010  250455 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.235026  250455 pod_ready.go:81] duration metric: took 4.165899ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.235038  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-klrbg" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.239012  250455 pod_ready.go:92] pod "kube-proxy-klrbg" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.239029  250455 pod_ready.go:81] duration metric: took 3.983115ms waiting for pod "kube-proxy-klrbg" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.239040  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.620556  250455 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.620576  250455 pod_ready.go:81] duration metric: took 381.52642ms waiting for pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.620587  250455 pod_ready.go:38] duration metric: took 12.944887405s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:57:40.620609  250455 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:57:40.620655  250455 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:57:40.644916  250455 api_server.go:70] duration metric: took 13.148457169s to wait for apiserver process to appear ...
	I0814 09:57:40.644941  250455 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:57:40.644952  250455 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0814 09:57:40.649438  250455 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0814 09:57:40.650227  250455 api_server.go:139] control plane version: v1.21.3
	I0814 09:57:40.650250  250455 api_server.go:129] duration metric: took 5.303285ms to wait for apiserver health ...
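The healthz probe above GETs https://192.168.49.2:8444/healthz and treats an HTTP 200 with body "ok" as a healthy control plane. A hedged sketch of such a check; TLS verification is skipped here purely to keep the example self-contained, whereas the real client trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy reports whether the /healthz endpoint returned "ok".
    func apiserverHealthy(url string) (bool, error) {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
    	ok, err := apiserverHealthy("https://192.168.49.2:8444/healthz")
    	fmt.Println(ok, err)
    }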
	I0814 09:57:40.650259  250455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:57:40.823310  250455 system_pods.go:59] 9 kube-system pods found
	I0814 09:57:40.823342  250455 system_pods.go:61] "coredns-558bd4d5db-zjjkn" [50cc162c-7c79-4bcb-a514-12fbea928898] Running
	I0814 09:57:40.823350  250455 system_pods.go:61] "etcd-default-k8s-different-port-20210814095040-6746" [c73a55e6-808a-4f05-85fb-040eb29a0cfa] Running
	I0814 09:57:40.823355  250455 system_pods.go:61] "kindnet-9zklk" [6f6c319c-8cf6-45c5-bba1-3a5999ff9a0e] Running
	I0814 09:57:40.823361  250455 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210814095040-6746" [df9f3575-c3f6-4921-8412-587a5aad2918] Running
	I0814 09:57:40.823372  250455 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210814095040-6746" [bf84559d-78a0-467e-8361-cf4b0badb18e] Running
	I0814 09:57:40.823383  250455 system_pods.go:61] "kube-proxy-klrbg" [18dba609-fb6b-4895-aca7-2d94942571f6] Running
	I0814 09:57:40.823390  250455 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210814095040-6746" [0a96c1f2-97cd-468f-9b5c-55bd7c0ad18c] Running
	I0814 09:57:40.823404  250455 system_pods.go:61] "metrics-server-7c784ccb57-2ms26" [0f0c284a-68cf-4545-835a-464713e03dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:57:40.823416  250455 system_pods.go:61] "storage-provisioner" [c822637b-9c4e-48fa-ba25-77aeb1c4f4ad] Running
	I0814 09:57:40.823428  250455 system_pods.go:74] duration metric: took 173.162554ms to wait for pod list to return data ...
	I0814 09:57:40.823439  250455 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:57:41.021831  250455 default_sa.go:45] found service account: "default"
	I0814 09:57:41.021855  250455 default_sa.go:55] duration metric: took 198.408847ms for default service account to be created ...
	I0814 09:57:41.021865  250455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:57:41.222890  250455 system_pods.go:86] 9 kube-system pods found
	I0814 09:57:41.222916  250455 system_pods.go:89] "coredns-558bd4d5db-zjjkn" [50cc162c-7c79-4bcb-a514-12fbea928898] Running
	I0814 09:57:41.222925  250455 system_pods.go:89] "etcd-default-k8s-different-port-20210814095040-6746" [c73a55e6-808a-4f05-85fb-040eb29a0cfa] Running
	I0814 09:57:41.222932  250455 system_pods.go:89] "kindnet-9zklk" [6f6c319c-8cf6-45c5-bba1-3a5999ff9a0e] Running
	I0814 09:57:41.222939  250455 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210814095040-6746" [df9f3575-c3f6-4921-8412-587a5aad2918] Running
	I0814 09:57:41.222950  250455 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210814095040-6746" [bf84559d-78a0-467e-8361-cf4b0badb18e] Running
	I0814 09:57:41.222957  250455 system_pods.go:89] "kube-proxy-klrbg" [18dba609-fb6b-4895-aca7-2d94942571f6] Running
	I0814 09:57:41.222967  250455 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210814095040-6746" [0a96c1f2-97cd-468f-9b5c-55bd7c0ad18c] Running
	I0814 09:57:41.222981  250455 system_pods.go:89] "metrics-server-7c784ccb57-2ms26" [0f0c284a-68cf-4545-835a-464713e03dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:57:41.222991  250455 system_pods.go:89] "storage-provisioner" [c822637b-9c4e-48fa-ba25-77aeb1c4f4ad] Running
	I0814 09:57:41.223003  250455 system_pods.go:126] duration metric: took 201.132399ms to wait for k8s-apps to be running ...
	I0814 09:57:41.223013  250455 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:57:41.223060  250455 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:41.236368  250455 system_svc.go:56] duration metric: took 13.34901ms WaitForService to wait for kubelet.
	I0814 09:57:41.236397  250455 kubeadm.go:547] duration metric: took 13.7399425s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:57:41.236423  250455 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:57:41.420718  250455 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:57:41.420740  250455 node_conditions.go:123] node cpu capacity is 8
	I0814 09:57:41.420750  250455 node_conditions.go:105] duration metric: took 184.321757ms to run NodePressure ...
	I0814 09:57:41.420760  250455 start.go:231] waiting for startup goroutines ...
	I0814 09:57:41.486511  250455 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:57:41.488533  250455 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210814095040-6746" cluster and "default" namespace by default
	I0814 09:57:41.189297  282733 out.go:177] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0814 09:57:41.189345  282733 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:57:41.189393  282733 ssh_runner.go:149] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0814 09:57:41.192496  282733 ssh_runner.go:306] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0814 09:57:41.192516  282733 ssh_runner.go:316] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0814 09:57:41.208661  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
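The stat/scp/apply sequence above is an idempotent "stage then apply" for the custom CNI manifest: probe whether /var/tmp/minikube/cni.yaml already exists, copy testdata/weavenet.yaml over if not, then `kubectl apply` it. A sketch of the pattern under the assumption of local file access (minikube does each step over SSH); `ensureCNI` is an illustrative helper:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"os/exec"
    )

    // ensureCNI copies the manifest into place if the existence probe fails,
    // then applies it with the version-pinned kubectl.
    func ensureCNI(src, dst, kubectl, kubeconfig string) error {
    	if _, err := os.Stat(dst); err != nil { // probe failed: stage the file
    		in, err := os.Open(src)
    		if err != nil {
    			return err
    		}
    		defer in.Close()
    		out, err := os.Create(dst)
    		if err != nil {
    			return err
    		}
    		defer out.Close()
    		if _, err := io.Copy(out, in); err != nil {
    			return err
    		}
    	}
    	return exec.Command("sudo", kubectl, "apply",
    		"--kubeconfig="+kubeconfig, "-f", dst).Run()
    }

    func main() {
    	fmt.Println(ensureCNI("testdata/weavenet.yaml", "/var/tmp/minikube/cni.yaml",
    		"/var/lib/minikube/binaries/v1.21.3/kubectl", "/var/lib/minikube/kubeconfig"))
    }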
	I0814 09:57:41.673507  282733 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:57:41.673570  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:41.673570  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=custom-weave-20210814093636-6746 minikube.k8s.io/updated_at=2021_08_14T09_57_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:41.753402  282733 ops.go:34] apiserver oom_adj: -16
	I0814 09:57:41.753417  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:42.333652  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:42.833306  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:43.333408  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:43.834059  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:44.333958  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:44.834039  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:45.333998  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:45.834028  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:46.333405  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:46.833848  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:47.333994  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:47.833870  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:48.333432  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:48.834179  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:49.333581  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:49.833252  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:50.333209  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:50.834002  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:51.333661  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:51.833244  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:52.333474  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:52.833971  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:53.333877  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:53.833352  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:53.907593  282733 kubeadm.go:985] duration metric: took 12.234091569s to wait for elevateKubeSystemPrivileges.
	I0814 09:57:53.907642  282733 kubeadm.go:392] StartCluster complete in 37.198685034s
	I0814 09:57:53.907663  282733 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:53.907758  282733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:57:53.911152  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:54.433928  282733 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20210814093636-6746" rescaled to 1
	I0814 09:57:54.433981  282733 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:57:54.436677  282733 out.go:177] * Verifying Kubernetes components...
	I0814 09:57:54.436729  282733 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:54.434039  282733 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:57:54.434070  282733 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0814 09:57:54.436859  282733 addons.go:59] Setting storage-provisioner=true in profile "custom-weave-20210814093636-6746"
	I0814 09:57:54.436883  282733 addons.go:135] Setting addon storage-provisioner=true in "custom-weave-20210814093636-6746"
	W0814 09:57:54.436890  282733 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:57:54.434201  282733 config.go:177] Loaded profile config "custom-weave-20210814093636-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:57:54.436923  282733 host.go:66] Checking if "custom-weave-20210814093636-6746" exists ...
	I0814 09:57:54.436921  282733 addons.go:59] Setting default-storageclass=true in profile "custom-weave-20210814093636-6746"
	I0814 09:57:54.436954  282733 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20210814093636-6746"
	I0814 09:57:54.437268  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:54.437444  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	6c854f8b0cd1a       523cad1a4df73       13 seconds ago      Exited              dashboard-metrics-scraper   1                   54f901300abf8
	9d1091049f869       9a07b5b4bfac0       23 seconds ago      Running             kubernetes-dashboard        0                   d6fcb757f64e9
	82e9e90eeb526       6e38f40d628db       24 seconds ago      Running             storage-provisioner         0                   ef0195e898ea1
	0098384a3b4ed       296a6d5035e2d       26 seconds ago      Running             coredns                     0                   a0ebd929835ad
	19c2862b1cff8       6de166512aa22       27 seconds ago      Running             kindnet-cni                 0                   31cc2c576d9a6
	f0f81c09afef0       adb2816ea823a       27 seconds ago      Running             kube-proxy                  0                   586824e601bb5
	a49b6961ce490       6be0dc1302e30       48 seconds ago      Running             kube-scheduler              0                   844a229d51e27
	ccaca53fc4432       0369cf4303ffd       48 seconds ago      Running             etcd                        0                   e9c15c0af872a
	e79c62702ba4f       3d174f00aa39e       48 seconds ago      Running             kube-apiserver              0                   38fa574413e2e
	d1f9978c5866c       bc2bb319a7038       48 seconds ago      Running             kube-controller-manager     0                   81c3e2185d7c1
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:52:08 UTC, end at Sat 2021-08-14 09:57:55 UTC. --
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.804707188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.804940948Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.806715419Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.837037262Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.837509849Z" level=info msg="StartContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.016938952Z" level=info msg="StartContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\" returns successfully"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.057166967Z" level=info msg="Finish piping stderr of container \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.057195766Z" level=info msg="Finish piping stdout of container \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.058575673Z" level=info msg="TaskExit event &TaskExit{ContainerID:a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06,ID:a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06,Pid:6324,ExitStatus:1,ExitedAt:2021-08-14 09:57:41.058354254 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.101787134Z" level=info msg="shim disconnected" id=a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.101869058Z" level=error msg="copy shim log" error="read /proc/self/fd/138: file already closed"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.226018904Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.261395257Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.261888663Z" level=info msg="StartContainer for \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.427034214Z" level=info msg="StartContainer for \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\" returns successfully"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.473230092Z" level=info msg="Finish piping stdout of container \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.473287946Z" level=info msg="Finish piping stderr of container \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.474045099Z" level=info msg="TaskExit event &TaskExit{ContainerID:6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c,ID:6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c,Pid:6393,ExitStatus:1,ExitedAt:2021-08-14 09:57:41.473801024 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.529700931Z" level=info msg="shim disconnected" id=6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.529790521Z" level=error msg="copy shim log" error="read /proc/self/fd/138: file already closed"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:42.228137269Z" level=info msg="RemoveContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:42.233292081Z" level=info msg="RemoveContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\" returns successfully"
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:44.024197104Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:44.078650284Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" host=fake.domain
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:44.079927287Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
	
	* 
	* ==> coredns [0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210814095040-6746
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210814095040-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=default-k8s-different-port-20210814095040-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_57_14_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:57:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210814095040-6746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Aug 2021 09:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20210814095040-6746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                18f93bb6-a3ab-4de6-8ec2-f7bfb51bb31f
	  Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-zjjkn                                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-default-k8s-different-port-20210814095040-6746                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-9zklk                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210814095040-6746              250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210814095040-6746    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-klrbg                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210814095040-6746              100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 metrics-server-7c784ccb57-2ms26                                            100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         26s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-54btm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-hjr7d                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  50s (x4 over 50s)  kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x4 over 50s)  kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x4 over 50s)  kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                29s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeReady
	  Normal  Starting                 28s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000025] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[Aug14 09:53] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:54] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:55] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:56] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth7345da06
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 92 45 4c d4 7c ac 08 06        .......EL.|...
	[  +3.039718] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth23b7056c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff da 36 50 a4 55 3b 08 06        .......6P.U;..
	[ +14.868976] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:57] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethf06170b5
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e 2a b3 8e d4 88 08 06        ......^*......
	[  +2.040503] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb67aee0d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff a6 9b ec 13 d9 b6 08 06        ..............
	[  +0.704016] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth708a0835
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 86 ba f1 19 7b e5 08 06        ..........{...
	[  +0.299880] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth6d17692c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 7e 73 53 da e8 08 06        .......~sS....
	[ +23.664383] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ee 66 27 07 a4 b6 08 06        .......f'.....
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff ee 66 27 07 a4 b6 08 06        .......f'.....
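
	The "martian source" lines above are the kernel flagging packets whose source address is implausible for the interface they arrived on (common with Docker bridges and CNI veths); the kernel prints them only while the log_martians sysctl is enabled. A minimal Go sketch that reads that switch, for illustration only:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// "IPv4: martian source ..." is emitted only when this sysctl is 1.
		data, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
		if err != nil {
			panic(err)
		}
		fmt.Println("log_martians =", strings.TrimSpace(string(data)))
	}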
	
	* 
	* ==> etcd [ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371] <==
	* 2021-08-14 09:57:07.306983 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-14 09:57:07.307052 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:57:21.433489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:57:30.840860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:57:34.018467 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (800.716482ms) to execute
	2021-08-14 09:57:34.018574 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2566" took too long (862.180967ms) to execute
	2021-08-14 09:57:35.273417 W | wal: sync duration of 1.246803688s, expected less than 1s
	2021-08-14 09:57:35.273739 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.220497259s) to execute
	2021-08-14 09:57:35.274091 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (1.055886126s) to execute
	2021-08-14 09:57:35.274654 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1144" took too long (387.97457ms) to execute
	2021-08-14 09:57:35.274689 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-default-k8s-different-port-20210814095040-6746\" " with result "range_response_count:1 size:5282" took too long (1.245119574s) to execute
	2021-08-14 09:57:36.613195 W | wal: sync duration of 1.072608169s, expected less than 1s
	2021-08-14 09:57:36.614034 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (896.441878ms) to execute
	2021-08-14 09:57:38.053672 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000074231s) to execute
	WARNING: 2021/08/14 09:57:38 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-14 09:57:38.575258 W | wal: sync duration of 1.961929212s, expected less than 1s
	2021-08-14 09:57:39.712059 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.651876595s) to execute
	2021-08-14 09:57:39.712497 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.307445228s) to execute
	2021-08-14 09:57:39.712760 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1144" took too long (2.426311943s) to execute
	2021-08-14 09:57:39.712852 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5067" took too long (1.389317314s) to execute
	2021-08-14 09:57:39.712973 W | etcdserver: read-only range request "key:\"/registry/minions/default-k8s-different-port-20210814095040-6746\" " with result "range_response_count:1 size:5067" took too long (3.095121193s) to execute
	2021-08-14 09:57:39.713398 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (3.091022166s) to execute
	2021-08-14 09:57:39.713655 W | etcdserver: request "header:<ID:8128006959887004037 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20210814095040-6746\" mod_revision:492 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20210814095040-6746\" value_size:645 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20210814095040-6746\" > >>" with result "size:16" took too long (175.862849ms) to execute
	2021-08-14 09:57:40.841109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:57:50.841460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
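
	The repeated "wal: sync duration ... expected less than 1s" warnings above mean the WAL fsync on the data disk ran slow; the multi-second read-only range requests are the downstream symptom. What etcd times here is essentially a write followed by fsync. A minimal Go sketch of that same measurement (a standalone probe against a temp file, not etcd's own code):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Hypothetical probe: time one write+fsync, the operation etcd's WAL warns about.
		f, err := os.CreateTemp("", "wal-probe-*")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		defer f.Close()

		buf := make([]byte, 8*1024) // roughly one small WAL record
		start := time.Now()
		if _, err := f.Write(buf); err != nil {
			panic(err)
		}
		if err := f.Sync(); err != nil { // the fsync etcd expects to finish in <1s
			panic(err)
		}
		fmt.Printf("sync duration: %s\n", time.Since(start))
	}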
	
	* 
	* ==> kernel <==
	*  09:57:55 up  1:40,  0 users,  load average: 3.02, 2.04, 1.83
	Linux default-k8s-different-port-20210814095040-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06] <==
	* Trace[926388937]: ---"Transaction committed" 1333ms (09:57:00.616)
	Trace[926388937]: [1.336460324s] [1.336460324s] END
	I0814 09:57:36.616543       1 trace.go:205] Trace[1163206533]: "Patch" url:/api/v1/namespaces/kube-system/pods/etcd-default-k8s-different-port-20210814095040-6746/status,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:57:35.279) (total time: 1336ms):
	Trace[1163206533]: ---"Object stored in database" 1334ms (09:57:00.616)
	Trace[1163206533]: [1.336880178s] [1.336880178s] END
	I0814 09:57:38.576856       1 trace.go:205] Trace[1835893352]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:57:38.060) (total time: 516ms):
	Trace[1835893352]: ---"Object stored in database" 515ms (09:57:00.576)
	Trace[1835893352]: [516.030459ms] [516.030459ms] END
	I0814 09:57:39.205281       1 client.go:360] parsed scheme: "passthrough"
	I0814 09:57:39.205324       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0814 09:57:39.205334       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0814 09:57:39.714935       1 trace.go:205] Trace[1543031706]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:57:37.285) (total time: 2428ms):
	Trace[1543031706]: ---"About to write a response" 2427ms (09:57:00.713)
	Trace[1543031706]: [2.42899888s] [2.42899888s] END
	I0814 09:57:39.716089       1 trace.go:205] Trace[383521653]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:57:38.323) (total time: 1392ms):
	Trace[383521653]: [1.392953815s] [1.392953815s] END
	I0814 09:57:39.716396       1 trace.go:205] Trace[384732503]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:57:38.323) (total time: 1393ms):
	Trace[384732503]: ---"Listing from storage done" 1393ms (09:57:00.716)
	Trace[384732503]: [1.393295026s] [1.393295026s] END
	I0814 09:57:39.718087       1 trace.go:205] Trace[1335615492]: "Get" url:/api/v1/nodes/default-k8s-different-port-20210814095040-6746,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:57:36.617) (total time: 3100ms):
	Trace[1335615492]: ---"About to write a response" 3098ms (09:57:00.715)
	Trace[1335615492]: [3.100859358s] [3.100859358s] END
	I0814 09:57:39.719191       1 trace.go:205] Trace[1060095040]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-zjjkn,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:57:36.622) (total time: 3097ms):
	Trace[1060095040]: ---"About to write a response" 3096ms (09:57:00.718)
	Trace[1060095040]: [3.097139245s] [3.097139245s] END
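
	The Trace[...] blocks above come from the apiserver's request tracing: a named trace records steps and is logged only when the whole request exceeds a latency threshold. A sketch assuming the k8s.io/utils/trace API (New/Step/LogIfLong), which is what these lines appear to be produced by; the slow step is simulated:

	package main

	import (
		"time"

		utiltrace "k8s.io/utils/trace"
	)

	func main() {
		// A named trace with a field, mirroring the Get traces above.
		t := utiltrace.New("Get", utiltrace.Field{Key: "url", Value: "/api/v1/nodes"})
		defer t.LogIfLong(100 * time.Millisecond) // logged only if the trace is slow

		time.Sleep(200 * time.Millisecond) // stand-in for a slow etcd round-trip
		t.Step("About to write a response")
	}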
	
	* 
	* ==> kube-controller-manager [d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde] <==
	* I0814 09:57:29.113537       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:57:29.217918       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:57:29.218290       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:57:29.224701       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:57:29.224764       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0814 09:57:29.307018       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-2ms26"
	I0814 09:57:29.727668       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0814 09:57:29.737743       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.745620       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:57:29.752541       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.752882       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:57:29.761720       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0814 09:57:29.803705       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.803777       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:57:29.814808       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.819942       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:57:29.829759       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.830026       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.832833       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.832896       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.833281       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.833317       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:57:29.906023       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-54btm"
	I0814 09:57:29.908430       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-hjr7d"
	I0814 09:57:31.646181       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d] <==
	* I0814 09:57:27.593121       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:57:27.593170       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:57:27.593222       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:57:27.680556       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:57:27.680590       1 server_others.go:212] Using iptables Proxier.
	I0814 09:57:27.680606       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:57:27.680620       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:57:27.681009       1 server.go:643] Version: v1.21.3
	I0814 09:57:27.684922       1 config.go:224] Starting endpoint slice config controller
	I0814 09:57:27.685061       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0814 09:57:27.685234       1 config.go:315] Starting service config controller
	I0814 09:57:27.685242       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0814 09:57:27.705654       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:57:27.722530       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:57:27.801538       1 shared_informer.go:247] Caches are synced for service config 
	I0814 09:57:27.801620       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3] <==
	* I0814 09:57:10.819379       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:57:10.819419       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:57:10.819623       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:57:10.819686       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:57:10.820657       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:57:10.824237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:57:10.824293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:57:10.824359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:57:10.824388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:10.824411       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:57:10.824528       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:57:10.824536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:57:10.824662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:10.826134       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:57:10.826228       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:57:10.826420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:57:10.826687       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:10.826764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:11.671044       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:57:11.714556       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:11.791088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:57:11.833580       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:57:11.902766       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:57:11.905723       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0814 09:57:12.320512       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:52:08 UTC, end at Sat 2021-08-14 09:57:55 UTC. --
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:30.006019    4822 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk7dc\" (UniqueName: \"kubernetes.io/projected/2936fbc6-dc7a-429f-b4ae-fa739e5e2c42-kube-api-access-kk7dc\") pod \"kubernetes-dashboard-6fcdf4f6d-hjr7d\" (UID: \"2936fbc6-dc7a-429f-b4ae-fa739e5e2c42\") "
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409203    4822 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409265    4822 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409453    4822 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wbv6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2ms26_kube-system(0f0c284a-68cf-4545-835a-464713e03dfc): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409520    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2ms26" podUID=0f0c284a-68cf-4545-835a-464713e03dfc
	Aug 14 09:57:31 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:31.134570    4822 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 14 09:57:31 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:31.135408    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-2ms26" podUID=0f0c284a-68cf-4545-835a-464713e03dfc
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:41.223821    4822 scope.go:111] "RemoveContainer" containerID="a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:42.227218    4822 scope.go:111] "RemoveContainer" containerID="a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:42.227451    4822 scope.go:111] "RemoveContainer" containerID="6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:42.227766    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-54btm_kubernetes-dashboard(dfc36e40-fb64-41d8-a005-ab76555690d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-54btm" podUID=dfc36e40-fb64-41d8-a005-ab76555690d0
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: W0814 09:57:42.361181    4822 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddfc36e40-fb64-41d8-a005-ab76555690d0/a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06 WatchSource:0}: container "a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06" in namespace "k8s.io": not found
	Aug 14 09:57:43 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:43.230896    4822 scope.go:111] "RemoveContainer" containerID="6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	Aug 14 09:57:43 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:43.231274    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-54btm_kubernetes-dashboard(dfc36e40-fb64-41d8-a005-ab76555690d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-54btm" podUID=dfc36e40-fb64-41d8-a005-ab76555690d0
	Aug 14 09:57:43 default-k8s-different-port-20210814095040-6746 kubelet[4822]: W0814 09:57:43.866539    4822 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddfc36e40-fb64-41d8-a005-ab76555690d0/6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c WatchSource:0}: task 6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c not found: not found
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080169    4822 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080226    4822 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080365    4822 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wbv6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2ms26_kube-system(0f0c284a-68cf-4545-835a-464713e03dfc): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080423    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2ms26" podUID=0f0c284a-68cf-4545-835a-464713e03dfc
	Aug 14 09:57:49 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:49.919811    4822 scope.go:111] "RemoveContainer" containerID="6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	Aug 14 09:57:49 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:49.920261    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-54btm_kubernetes-dashboard(dfc36e40-fb64-41d8-a005-ab76555690d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-54btm" podUID=dfc36e40-fb64-41d8-a005-ab76555690d0
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:52.592691    4822 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
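
	Every ErrImagePull above bottoms out in the same root cause: the metrics-server image is pinned to the unresolvable registry fake.domain (evidently deliberate in this test), so containerd's HEAD request fails at the DNS step. A minimal Go sketch reproducing just that root error:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The pull failure chain ends in DNS: "fake.domain" does not resolve.
		_, err := net.LookupHost("fake.domain")
		fmt.Println(err) // e.g. "lookup fake.domain: no such host"
	}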
	
	* 
	* ==> kubernetes-dashboard [9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526] <==
	* 2021/08/14 09:57:31 Starting overwatch
	2021/08/14 09:57:31 Using namespace: kubernetes-dashboard
	2021/08/14 09:57:31 Using in-cluster config to connect to apiserver
	2021/08/14 09:57:31 Using secret token for csrf signing
	2021/08/14 09:57:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:57:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:57:31 Successful initial request to the apiserver, version: v1.21.3
	2021/08/14 09:57:31 Generating JWE encryption key
	2021/08/14 09:57:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:57:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:57:32 Initializing JWE encryption key from synchronized object
	2021/08/14 09:57:32 Creating in-cluster Sidecar client
	2021/08/14 09:57:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:57:32 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557] <==
	* I0814 09:57:30.846872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:57:30.860613       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:57:30.860668       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:57:30.866955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:57:30.867063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210814095040-6746_e9f82a95-6a81-4f0d-a23c-89281c09e4f3!
	I0814 09:57:30.867812       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34300b54-b6f4-4572-b965-67fa08c77a62", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210814095040-6746_e9f82a95-6a81-4f0d-a23c-89281c09e4f3 became leader
	I0814 09:57:30.967510       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210814095040-6746_e9f82a95-6a81-4f0d-a23c-89281c09e4f3!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746: exit status 2 (331.703858ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
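
The --format argument above is a Go text/template executed against minikube's status object, which is why stdout is the bare field value "Running". A minimal sketch of the mechanism, with the struct pared down to the one field used here:

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors just the field the template above selects.
	type Status struct {
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Writes "Running" to stdout, matching the stdout block above.
		if err := tmpl.Execute(os.Stdout, Status{APIServer: "Running"}); err != nil {
			panic(err)
		}
	}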
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-2ms26
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 describe pod metrics-server-7c784ccb57-2ms26
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210814095040-6746 describe pod metrics-server-7c784ccb57-2ms26: exit status 1 (72.275024ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-2ms26" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210814095040-6746 describe pod metrics-server-7c784ccb57-2ms26: exit status 1
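
The post-mortem helper first lists pods whose status.phase is not Running, then describes each one; the NotFound above just means the pod was deleted between the two steps. A minimal Go sketch of the listing step as a shell-out (function name hypothetical; the kubectl arguments are taken verbatim from the run above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// nonRunningPods mirrors the helpers_test.go step above: list the names of
	// pods in any namespace whose status.phase is not Running.
	func nonRunningPods(kubeContext string) (string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running",
		).CombinedOutput()
		return string(out), err
	}

	func main() {
		pods, err := nonRunningPods("default-k8s-different-port-20210814095040-6746")
		fmt.Println(pods, err)
	}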
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210814095040-6746
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210814095040-6746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551",
	        "Created": "2021-08-14T09:50:41.680931933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250731,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-14T09:52:08.402850153Z",
	            "FinishedAt": "2021-08-14T09:52:06.144304231Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/hostname",
	        "HostsPath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/hosts",
	        "LogPath": "/var/lib/docker/containers/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551/981ed925d6734fa8bac53e718493f6164214f89114802b99c24824a8b0d8e551-json.log",
	        "Name": "/default-k8s-different-port-20210814095040-6746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210814095040-6746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210814095040-6746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532-init/diff:/var/lib/docker/overlay2/44293204ffcddab904fa39f43ac7c6e7ffe7ce16a314eee270b092f522cebd43/diff:/var/lib/docker/overlay2/d8341f611b86153e5f6cb362ab520c3ae36188ea6716f190fc0174ff1ea3ee74/diff:/var/lib/docker/overlay2/bd7d3c333112b94c560c1f759b3031dacd03064ccdc9df8e5358d8a645061331/diff:/var/lib/docker/overlay2/09e25c5f07d4475398fafae89532f1d953d96a76196aa84622658de28364fd3f/diff:/var/lib/docker/overlay2/2a3b6b58e5882d0ba0740b15836902b8ed1a5fb9d23887eb678e006c51dd73c7/diff:/var/lib/docker/overlay2/76ace14c33797e6813f2c4e08c8d912ecfd8fb23926788a228fa406899bb17fd/diff:/var/lib/docker/overlay2/b6c1cb0d4e012909f55658bcbc13333804f198f73fe55c89880463627df2a273/diff:/var/lib/docker/overlay2/32d72b1f852d4e6adf9606825d57744f289d1bd71f9e97c0c94e254c9b49a0a7/diff:/var/lib/docker/overlay2/83bfd21927e324006d812f85db5253c2fa26e904874ebe6eca654a31c3663b76/diff:/var/lib/docker/overlay2/09c644
86d30f3ce93a9c989d2320cab6117e38d8d14087dcc28b47b09417e0af/diff:/var/lib/docker/overlay2/07c465014f3b88377cc91b8d077258d8c0ecdcc186de832e2f804ac803f96bb6/diff:/var/lib/docker/overlay2/ef1da03dcb3fcd6903dc01358fd85a36f8acbece460a1be166b2189f4c9a890d/diff:/var/lib/docker/overlay2/06c9999c225f6979a474a4add4fdbe8a868a5d7bb2c4e0907f6f8c032f0dc3dc/diff:/var/lib/docker/overlay2/6727de022cf39e5df68d1735043e8761fb8f6a9a8e8f3940cc2d3bb6dd859fdc/diff:/var/lib/docker/overlay2/cd3abb7d0de10360ebcb7d54662cd79f92398959ca8add5f1a80f6fa75fac2fe/diff:/var/lib/docker/overlay2/5d9c6d8acdc0db40dfeb33b99cec5a84630be4548651da75930de46be0bada16/diff:/var/lib/docker/overlay2/0d83fd617ee858bc4b175e5d63e60389604823c74eadf9e7b094d684a3606936/diff:/var/lib/docker/overlay2/98e0eaf33dc37fae747406662d0b14e912065812887be7274a2c27b87105e0a7/diff:/var/lib/docker/overlay2/f30a9abd2c351bb9e974c8b070fb489a15669eb772c0a7692069196bde6d38c2/diff:/var/lib/docker/overlay2/542980593ba0e18478833840f8a01d93cd345671c3c627bebb6bfc610e24df96/diff:/var/lib/d
ocker/overlay2/5964e0aebfcd88775ca08769a5a0a50c474ded9c08c17cec0d5eb1e88470d8cc/diff:/var/lib/docker/overlay2/cb70cd4699e2d3a88d37760d4575d0b68dd6a2d571eb9bc00e4ea65334fa39d6/diff:/var/lib/docker/overlay2/d1b622693d005bfff88b41f898520d720897832f4740859a062a087528632a45/diff:/var/lib/docker/overlay2/93087667fcbed5997d90d232200d1c052c164d476435896fd420ac24d1479506/diff:/var/lib/docker/overlay2/0802356ccb344d298ae9401c44c29f71c98eac0b0304bd96a79110c16564fefa/diff:/var/lib/docker/overlay2/d7eea48b12fccaa4c4ffd048d5e70d9609d0a32f642eac39fbaafcaf8df8ee5e/diff:/var/lib/docker/overlay2/2f9d94bc10599fcc45fb8bed114c912ff657664f981c0da2bb8a3e02bddd1c06/diff:/var/lib/docker/overlay2/40acd190e2f5e2316bc19d17aed36b8a50a3be404a90bca58d26e6e939428c16/diff:/var/lib/docker/overlay2/02bd7a3b51ac7a3c3f9c89ace72c7f9790120e89f4628f197f1cfc9859623b55/diff:/var/lib/docker/overlay2/937c337b5c08153af0ca14a0f98e805223a44858531b0dcacdeffa5e7c9b9d5a/diff:/var/lib/docker/overlay2/c28ba46c40ee69f9a39b3c7e1bef20b56282cc8478c117546ad40889969
39c93/diff:/var/lib/docker/overlay2/2b30fea3d6a161389dc317d3bba6468e111f2782fc2de29399dbaff500217e0e/diff:/var/lib/docker/overlay2/fd1824b771ae21d235f0bd6186e3da121d02f12a0c98fb8c3205f4fa216420d3/diff:/var/lib/docker/overlay2/d1a43bd2c1485a2051100b28c50ca4afb530e7a9cace2b7ed1bb19098a8b1b6c/diff:/var/lib/docker/overlay2/e5626256f4126d2d314b1737c78f12ceabf819f05f933b8539d23c83ed360571/diff:/var/lib/docker/overlay2/0e28b1b6d42bc8ec33754e6a4d94556573199f71a1745d89b48ecf4e53c4b9d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532/merged",
	                "UpperDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532/diff",
	                "WorkDir": "/var/lib/docker/overlay2/355a850f4d9b2a6bb91c9408798730a37c2d1401dd463c0b8f807160147c2532/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210814095040-6746",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210814095040-6746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210814095040-6746",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210814095040-6746",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210814095040-6746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6885984f866462f51a690bbe5383d686ae40fd5953cb59fd00db1d47ffb40fbc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32954"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6885984f8664",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210814095040-6746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "981ed925d673"
	                    ],
	                    "NetworkID": "fcd9d5f352a71d72e683e953cda11a59709ddebb4388de429a5d199326a6eb94",
	                    "EndpointID": "a8a45f1062529330dec738132ba63556681fdd80b34dba7ff203c56603c04ef3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
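
The "Ports" map in the inspect output above shows how Docker published each exposed container port on 127.0.0.1 with a dynamically assigned host port (22/tcp -> 32958, 2376/tcp -> 32957, and so on). The harness resolves these back with the same Go template that appears later in this log in the cli_runner call: docker container inspect -f "{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}". The following is a minimal sketch of that lookup, not minikube's actual helper, assuming a local Docker daemon and reusing the container name from the output above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Container name taken from the inspect output above.
		name := "default-k8s-different-port-20210814095040-6746"
		// Same template the test harness uses further down in this log
		// to resolve the host port mapped to the container's SSH port.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			// `docker container inspect` exits non-zero when the container
			// does not exist, so the port must be treated as unknown here.
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}

Against the container state captured above this would print 32958; the assigned port changes on every container (re)creation, which is why it is always read back from inspect rather than recorded once.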
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746: exit status 2 (358.087223ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210814095040-6746 logs -n 25
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | no-preload-20210814094108-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:38 UTC | Sat, 14 Aug 2021 09:50:39 UTC |
	|         | no-preload-20210814094108-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20210814095039-6746      | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:39 UTC | Sat, 14 Aug 2021 09:50:40 UTC |
	|         | disable-driver-mounts-20210814095039-6746                  |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:45:12 UTC | Sat, 14 Aug 2021 09:50:56 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| -p      | embed-certs-20210814094325-6746                            | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:06 UTC | Sat, 14 Aug 2021 09:51:07 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:08 UTC | Sat, 14 Aug 2021 09:51:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:50:40 UTC | Sat, 14 Aug 2021 09:51:36 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:51:45 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:51:45 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:52:06 UTC | Sat, 14 Aug 2021 09:52:06 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:04 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210814094325-6746                | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:53:08 UTC |
	|         | embed-certs-20210814094325-6746                            |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:53:08 UTC | Sat, 14 Aug 2021 09:54:08 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:08 UTC | Sat, 14 Aug 2021 09:54:09 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:09 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:54:29 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210814095308-6746 --memory=2200            | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:54:29 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:04 UTC | Sat, 14 Aug 2021 09:55:04 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:29 UTC | Sat, 14 Aug 2021 09:55:32 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210814095308-6746                 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:32 UTC | Sat, 14 Aug 2021 09:55:33 UTC |
	|         | newest-cni-20210814095308-6746                             |                                                |         |         |                               |                               |
	| start   | -p auto-20210814093634-6746                                | auto-20210814093634-6746                       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:55:33 UTC | Sat, 14 Aug 2021 09:56:43 UTC |
	|         | --memory=2048                                              |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	| ssh     | -p auto-20210814093634-6746                                | auto-20210814093634-6746                       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:56:43 UTC | Sat, 14 Aug 2021 09:56:43 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |         |                               |                               |
	| delete  | -p auto-20210814093634-6746                                | auto-20210814093634-6746                       | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:56:53 UTC | Sat, 14 Aug 2021 09:56:56 UTC |
	| start   | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:52:06 UTC | Sat, 14 Aug 2021 09:57:41 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:57:51 UTC | Sat, 14 Aug 2021 09:57:52 UTC |
	|         | default-k8s-different-port-20210814095040-6746             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20210814095040-6746             | default-k8s-different-port-20210814095040-6746 | jenkins | v1.22.0 | Sat, 14 Aug 2021 09:57:54 UTC | Sat, 14 Aug 2021 09:57:55 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:56:56
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:56:56.952852  282733 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:56:56.952933  282733 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:56.952943  282733 out.go:311] Setting ErrFile to fd 2...
	I0814 09:56:56.952948  282733 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:56.953064  282733 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:56:56.953314  282733 out.go:305] Setting JSON to false
	I0814 09:56:56.991428  282733 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":5979,"bootTime":1628929038,"procs":249,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:56:56.991543  282733 start.go:121] virtualization: kvm guest
	I0814 09:56:56.994228  282733 out.go:177] * [custom-weave-20210814093636-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:56:56.995753  282733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:56:56.994374  282733 notify.go:169] Checking for updates...
	I0814 09:56:56.997263  282733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:56:56.998575  282733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:56:56.999879  282733 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:56:57.000361  282733 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:56:57.000448  282733 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:56:57.000519  282733 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:56:57.000554  282733 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:56:57.051337  282733 docker.go:132] docker version: linux-19.03.15
	I0814 09:56:57.051432  282733 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:56:57.135338  282733 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:56:57.089135632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:56:57.135446  282733 docker.go:244] overlay module found
	I0814 09:56:57.137780  282733 out.go:177] * Using the docker driver based on user configuration
	I0814 09:56:57.137807  282733 start.go:278] selected driver: docker
	I0814 09:56:57.137813  282733 start.go:751] validating driver "docker" against <nil>
	I0814 09:56:57.137834  282733 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:56:57.137882  282733 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:56:57.137902  282733 out.go:242] ! Your cgroup does not allow setting memory.
	I0814 09:56:57.139342  282733 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:56:57.140124  282733 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:56:57.221839  282733 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-14 09:56:57.178208852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:56:57.221970  282733 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:56:57.222147  282733 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:56:57.222174  282733 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0814 09:56:57.222192  282733 start_flags.go:272] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0814 09:56:57.222204  282733 start_flags.go:277] config:
	{Name:custom-weave-20210814093636-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:56:57.225204  282733 out.go:177] * Starting control plane node custom-weave-20210814093636-6746 in cluster custom-weave-20210814093636-6746
	I0814 09:56:57.225246  282733 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:56:57.226685  282733 out.go:177] * Pulling base image ...
	I0814 09:56:57.226724  282733 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:56:57.226763  282733 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:56:57.226779  282733 cache.go:56] Caching tarball of preloaded images
	I0814 09:56:57.226817  282733 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:56:57.226937  282733 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0814 09:56:57.226957  282733 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0814 09:56:57.227074  282733 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/config.json ...
	I0814 09:56:57.227109  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/config.json: {Name:mkba4376994c19173a224130a1ac43ffb40a0b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:56:57.321736  282733 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:56:57.321772  282733 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:56:57.321792  282733 cache.go:205] Successfully downloaded all kic artifacts
	I0814 09:56:57.321845  282733 start.go:313] acquiring machines lock for custom-weave-20210814093636-6746: {Name:mk8a34e7e0bd18f9f8d5595fe521fee684812b37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:56:57.321979  282733 start.go:317] acquired machines lock for "custom-weave-20210814093636-6746" in 107.945µs
	I0814 09:56:57.322011  282733 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210814093636-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:56:57.322109  282733 start.go:126] createHost starting for "" (driver="docker")
	I0814 09:56:58.189011  250455 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.435152346s)
	I0814 09:56:58.189096  250455 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0814 09:56:58.198874  250455 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0814 09:56:58.198933  250455 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:56:58.221441  250455 cri.go:76] found id: ""
	I0814 09:56:58.221508  250455 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:56:58.248974  250455 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:56:58.249022  250455 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:56:58.276981  250455 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:56:58.277025  250455 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0814 09:56:58.565330  250455 out.go:204]   - Generating certificates and keys ...
	I0814 09:56:59.462535  250455 out.go:204]   - Booting up control plane ...
	I0814 09:56:57.324903  282733 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0814 09:56:57.325198  282733 start.go:160] libmachine.API.Create for "custom-weave-20210814093636-6746" (driver="docker")
	I0814 09:56:57.325242  282733 client.go:168] LocalClient.Create starting
	I0814 09:56:57.325324  282733 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem
	I0814 09:56:57.325365  282733 main.go:130] libmachine: Decoding PEM data...
	I0814 09:56:57.325387  282733 main.go:130] libmachine: Parsing certificate...
	I0814 09:56:57.325533  282733 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem
	I0814 09:56:57.325558  282733 main.go:130] libmachine: Decoding PEM data...
	I0814 09:56:57.325579  282733 main.go:130] libmachine: Parsing certificate...
	I0814 09:56:57.325972  282733 cli_runner.go:115] Run: docker network inspect custom-weave-20210814093636-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0814 09:56:57.365149  282733 cli_runner.go:162] docker network inspect custom-weave-20210814093636-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0814 09:56:57.365227  282733 network_create.go:255] running [docker network inspect custom-weave-20210814093636-6746] to gather additional debugging logs...
	I0814 09:56:57.365253  282733 cli_runner.go:115] Run: docker network inspect custom-weave-20210814093636-6746
	W0814 09:56:57.403769  282733 cli_runner.go:162] docker network inspect custom-weave-20210814093636-6746 returned with exit code 1
	I0814 09:56:57.403798  282733 network_create.go:258] error running [docker network inspect custom-weave-20210814093636-6746]: docker network inspect custom-weave-20210814093636-6746: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20210814093636-6746
	I0814 09:56:57.403826  282733 network_create.go:260] output of [docker network inspect custom-weave-20210814093636-6746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20210814093636-6746
	
	** /stderr **
	I0814 09:56:57.403873  282733 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:56:57.443695  282733 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcd9d5f352a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fd:56:42:d1}}
	I0814 09:56:57.444831  282733 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00061e030] misses:0}
	I0814 09:56:57.444886  282733 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0814 09:56:57.444900  282733 network_create.go:106] attempt to create docker network custom-weave-20210814093636-6746 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0814 09:56:57.444949  282733 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210814093636-6746
	I0814 09:56:57.517061  282733 network_create.go:90] docker network custom-weave-20210814093636-6746 192.168.58.0/24 created
	I0814 09:56:57.517091  282733 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20210814093636-6746" container
	I0814 09:56:57.517209  282733 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0814 09:56:57.561425  282733 cli_runner.go:115] Run: docker volume create custom-weave-20210814093636-6746 --label name.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --label created_by.minikube.sigs.k8s.io=true
	I0814 09:56:57.604821  282733 oci.go:102] Successfully created a docker volume custom-weave-20210814093636-6746
	I0814 09:56:57.604920  282733 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210814093636-6746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --entrypoint /usr/bin/test -v custom-weave-20210814093636-6746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0814 09:56:58.406935  282733 oci.go:106] Successfully prepared a docker volume custom-weave-20210814093636-6746
	W0814 09:56:58.406995  282733 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0814 09:56:58.407005  282733 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0814 09:56:58.407006  282733 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:56:58.407040  282733 kic.go:179] Starting extracting preloaded images to volume ...
	I0814 09:56:58.407064  282733 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0814 09:56:58.407091  282733 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210814093636-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0814 09:56:58.495279  282733 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210814093636-6746 --name custom-weave-20210814093636-6746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210814093636-6746 --network custom-weave-20210814093636-6746 --ip 192.168.58.2 --volume custom-weave-20210814093636-6746:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0814 09:56:59.044706  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Running}}
	I0814 09:56:59.097474  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:56:59.159235  282733 cli_runner.go:115] Run: docker exec custom-weave-20210814093636-6746 stat /var/lib/dpkg/alternatives/iptables
	I0814 09:56:59.310241  282733 oci.go:278] the created container "custom-weave-20210814093636-6746" has a running status.
	I0814 09:56:59.310278  282733 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa...
	I0814 09:56:59.551124  282733 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0814 09:56:59.987572  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:00.030969  282733 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0814 09:57:00.030990  282733 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210814093636-6746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0814 09:57:02.834646  282733 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210814093636-6746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427486341s)
	I0814 09:57:02.834685  282733 kic.go:188] duration metric: took 4.427644 seconds to extract preloaded images to volume
	I0814 09:57:02.834765  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:02.873812  282733 machine.go:88] provisioning docker machine ...
	I0814 09:57:02.873847  282733 ubuntu.go:169] provisioning hostname "custom-weave-20210814093636-6746"
	I0814 09:57:02.873913  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:02.910725  282733 main.go:130] libmachine: Using SSH client type: native
	I0814 09:57:02.910963  282733 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I0814 09:57:02.910988  282733 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20210814093636-6746 && echo "custom-weave-20210814093636-6746" | sudo tee /etc/hostname
	I0814 09:57:03.093734  282733 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20210814093636-6746
	
	I0814 09:57:03.093800  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.134690  282733 main.go:130] libmachine: Using SSH client type: native
	I0814 09:57:03.134852  282733 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I0814 09:57:03.134878  282733 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20210814093636-6746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210814093636-6746/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20210814093636-6746' | sudo tee -a /etc/hosts; 
				fi
			fi
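	The script above is idempotent: if /etc/hosts does not already carry a line ending in the node hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one, so the net effect is a single entry of the form (taken from the echo in the script):
	
		127.0.1.1 custom-weave-20210814093636-6746
	
	The empty command output at 09:57:03.260 below indicates the no-output branch ran (the hostname was already present, or sed rewrote it in place).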
	I0814 09:57:03.260393  282733 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:57:03.260428  282733 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube}
	I0814 09:57:03.260450  282733 ubuntu.go:177] setting up certificates
	I0814 09:57:03.260459  282733 provision.go:83] configureAuth start
	I0814 09:57:03.260507  282733 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210814093636-6746
	I0814 09:57:03.299898  282733 provision.go:138] copyHostCerts
	I0814 09:57:03.299951  282733 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem, removing ...
	I0814 09:57:03.299958  282733 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem
	I0814 09:57:03.300017  282733 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.pem (1078 bytes)
	I0814 09:57:03.300102  282733 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem, removing ...
	I0814 09:57:03.300114  282733 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem
	I0814 09:57:03.300141  282733 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cert.pem (1123 bytes)
	I0814 09:57:03.300211  282733 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem, removing ...
	I0814 09:57:03.300220  282733 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem
	I0814 09:57:03.300241  282733 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/key.pem (1679 bytes)
	I0814 09:57:03.300282  282733 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20210814093636-6746 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210814093636-6746]
	I0814 09:57:03.445860  282733 provision.go:172] copyRemoteCerts
	I0814 09:57:03.445907  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:57:03.445946  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.485547  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:03.575579  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 09:57:03.592246  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0814 09:57:03.607659  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:57:03.623504  282733 provision.go:86] duration metric: configureAuth took 363.034675ms
	I0814 09:57:03.623522  282733 ubuntu.go:193] setting minikube options for container-runtime
	I0814 09:57:03.623672  282733 config.go:177] Loaded profile config "custom-weave-20210814093636-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:57:03.623683  282733 machine.go:91] provisioned docker machine in 749.85194ms
	I0814 09:57:03.623690  282733 client.go:171] LocalClient.Create took 6.298439787s
	I0814 09:57:03.623709  282733 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210814093636-6746" took 6.298512324s
	I0814 09:57:03.623721  282733 start.go:267] post-start starting for "custom-weave-20210814093636-6746" (driver="docker")
	I0814 09:57:03.623731  282733 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:57:03.623783  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:57:03.623829  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.662518  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:03.751889  282733 ssh_runner.go:149] Run: cat /etc/os-release
	I0814 09:57:03.754499  282733 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0814 09:57:03.754520  282733 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0814 09:57:03.754531  282733 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0814 09:57:03.754536  282733 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0814 09:57:03.754544  282733 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/addons for local assets ...
	I0814 09:57:03.754583  282733 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files for local assets ...
	I0814 09:57:03.754678  282733 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem -> 67462.pem in /etc/ssl/certs
	I0814 09:57:03.754818  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:57:03.761547  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:57:03.778521  282733 start.go:270] post-start completed in 154.78447ms
	I0814 09:57:03.778884  282733 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210814093636-6746
	I0814 09:57:03.817300  282733 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/config.json ...
	I0814 09:57:03.817501  282733 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:57:03.817539  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.856432  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:03.940575  282733 start.go:129] duration metric: createHost completed in 6.618453785s
	I0814 09:57:03.940602  282733 start.go:80] releasing machines lock for "custom-weave-20210814093636-6746", held for 6.618608448s
	I0814 09:57:03.940675  282733 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210814093636-6746
	I0814 09:57:03.978456  282733 ssh_runner.go:149] Run: systemctl --version
	I0814 09:57:03.978510  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:03.978523  282733 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0814 09:57:03.978613  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:04.019693  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:04.020044  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:04.108287  282733 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0814 09:57:04.117763  282733 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0814 09:57:04.134850  282733 docker.go:153] disabling docker service ...
	I0814 09:57:04.134892  282733 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0814 09:57:04.150707  282733 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0814 09:57:04.158955  282733 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0814 09:57:04.220536  282733 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0814 09:57:04.279074  282733 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0814 09:57:04.287628  282733 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:57:04.299359  282733 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
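	For reference, the base64 payload in the command above decodes to the containerd config.toml that minikube writes before restarting the runtime. An abridged decode of the CRI-relevant keys (taken directly from the payload):
	
		root = "/var/lib/containerd"
		state = "/run/containerd"
		[grpc]
		  address = "/run/containerd/containerd.sock"
		[plugins.cri]
		  stream_server_port = "10010"
		  sandbox_image = "k8s.gcr.io/pause:3.4.1"
		[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
		  runtime_type = "io.containerd.runc.v2"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		    SystemdCgroup = false
		[plugins.cri.containerd]
		  snapshotter = "overlayfs"
		[plugins.cri.cni]
		  bin_dir = "/opt/cni/bin"
		  conf_dir = "/etc/cni/net.d"
	
	Note SystemdCgroup = false, which matches the CgroupDriver:cgroupfs setting passed to kubeadm later in this log.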
	I0814 09:57:04.312669  282733 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:57:04.319068  282733 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 09:57:04.319116  282733 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0814 09:57:04.325942  282733 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:57:04.332327  282733 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0814 09:57:04.386037  282733 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0814 09:57:04.450836  282733 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0814 09:57:04.450903  282733 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0814 09:57:04.454199  282733 start.go:413] Will wait 60s for crictl version
	I0814 09:57:04.454251  282733 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:57:04.477022  282733 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-14T09:57:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0814 09:57:13.511644  250455 out.go:204]   - Configuring RBAC rules ...
	I0814 09:57:13.923248  250455 cni.go:93] Creating CNI manager for ""
	I0814 09:57:13.923271  250455 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:57:15.526541  282733 ssh_runner.go:149] Run: sudo crictl version
	I0814 09:57:15.614160  282733 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0814 09:57:15.614216  282733 ssh_runner.go:149] Run: containerd --version
	I0814 09:57:15.636518  282733 ssh_runner.go:149] Run: containerd --version
	I0814 09:57:13.924971  250455 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 09:57:13.925044  250455 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0814 09:57:13.928570  250455 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:57:13.928591  250455 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0814 09:57:13.941040  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 09:57:14.170210  250455 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:57:14.170256  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:14.170293  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=default-k8s-different-port-20210814095040-6746 minikube.k8s.io/updated_at=2021_08_14T09_57_14_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:14.186307  250455 ops.go:34] apiserver oom_adj: -16
	I0814 09:57:14.302328  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:14.880772  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:15.381100  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:15.880902  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:16.380680  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:15.659220  282733 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0814 09:57:15.659305  282733 cli_runner.go:115] Run: docker network inspect custom-weave-20210814093636-6746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0814 09:57:15.700463  282733 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0814 09:57:15.703756  282733 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
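	The one-liner above regenerates /etc/hosts in a single pass: grep -v drops any stale host.minikube.internal entry, the echo appends the current gateway mapping, and sudo cp installs the temp file. The resulting entry (reconstructed from the echo):
	
		192.168.58.1	host.minikube.internal
	
	The same pattern is applied for control-plane.minikube.internal a few lines further down.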
	I0814 09:57:15.712649  282733 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:57:15.712716  282733 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:57:15.735226  282733 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:57:15.735242  282733 containerd.go:517] Images already preloaded, skipping extraction
	I0814 09:57:15.735277  282733 ssh_runner.go:149] Run: sudo crictl images --output json
	I0814 09:57:15.755650  282733 containerd.go:613] all images are preloaded for containerd runtime.
	I0814 09:57:15.755667  282733 cache_images.go:74] Images are preloaded, skipping loading
	I0814 09:57:15.755708  282733 ssh_runner.go:149] Run: sudo crictl info
	I0814 09:57:15.776315  282733 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0814 09:57:15.776343  282733 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0814 09:57:15.776355  282733 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210814093636-6746 NodeName:custom-weave-20210814093636-6746 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0814 09:57:15.776470  282733 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "custom-weave-20210814093636-6746"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:57:15.776545  282733 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20210814093636-6746 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
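	Both payloads above are rendered in memory and copied to the node over SSH just below: the kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new and the kubelet unit drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once the files are in place, one way to confirm what kubelet will actually execute (a standard systemd command, not part of this run) would be:
	
		systemctl cat kubelet   # prints the base unit plus the 10-kubeadm.conf ExecStart override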
	I0814 09:57:15.776582  282733 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0814 09:57:15.782969  282733 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:57:15.783027  282733 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:57:15.789022  282733 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (546 bytes)
	I0814 09:57:15.800510  282733 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:57:15.811775  282733 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2082 bytes)
	I0814 09:57:15.823033  282733 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0814 09:57:15.825569  282733 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:57:15.834080  282733 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746 for IP: 192.168.58.2
	I0814 09:57:15.834123  282733 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key
	I0814 09:57:15.834140  282733 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key
	I0814 09:57:15.834183  282733 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.key
	I0814 09:57:15.834194  282733 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt with IP's: []
	I0814 09:57:16.025555  282733 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt ...
	I0814 09:57:16.025580  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt: {Name:mk3a7b86b266f1b42b6ee6625378c03e373788ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.025769  282733 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.key ...
	I0814 09:57:16.025786  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.key: {Name:mk6dc3f7832c9b7557e57f6e827aea7c6af8ba28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.025890  282733 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041
	I0814 09:57:16.025901  282733 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0814 09:57:16.192301  282733 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041 ...
	I0814 09:57:16.192332  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041: {Name:mkbc64c1851ab5694e391aa9b9fee02fcc097261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.192506  282733 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041 ...
	I0814 09:57:16.192522  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041: {Name:mke63e08f1a1ec5808cd3f6054f1a872043945b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.192617  282733 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt
	I0814 09:57:16.192708  282733 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key
	I0814 09:57:16.192790  282733 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key
	I0814 09:57:16.192819  282733 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt with IP's: []
	I0814 09:57:16.466484  282733 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt ...
	I0814 09:57:16.466509  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt: {Name:mk15d3e1eab26da19f8f832875268908ff89a17f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.466662  282733 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key ...
	I0814 09:57:16.466676  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key: {Name:mk79da5996dabdc5d072ddc40c197548e809f24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:16.466845  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem (1338 bytes)
	W0814 09:57:16.466916  282733 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746_empty.pem, impossibly tiny 0 bytes
	I0814 09:57:16.466928  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 09:57:16.466953  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/ca.pem (1078 bytes)
	I0814 09:57:16.466976  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:57:16.466999  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/key.pem (1679 bytes)
	I0814 09:57:16.467044  282733 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem (1708 bytes)
	I0814 09:57:16.467987  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0814 09:57:16.484684  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:57:16.500174  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:57:16.515342  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 09:57:16.530218  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:57:16.545318  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0814 09:57:16.560434  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:57:16.575191  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 09:57:16.590133  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:57:16.604812  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/certs/6746.pem --> /usr/share/ca-certificates/6746.pem (1338 bytes)
	I0814 09:57:16.619537  282733 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/ssl/certs/67462.pem --> /usr/share/ca-certificates/67462.pem (1708 bytes)
	I0814 09:57:16.634617  282733 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:57:16.645623  282733 ssh_runner.go:149] Run: openssl version
	I0814 09:57:16.650009  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:57:16.656351  282733 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:57:16.659091  282733 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 14 09:05 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:57:16.659131  282733 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:57:16.663354  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:57:16.669677  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6746.pem && ln -fs /usr/share/ca-certificates/6746.pem /etc/ssl/certs/6746.pem"
	I0814 09:57:16.675936  282733 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6746.pem
	I0814 09:57:16.678661  282733 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 14 09:10 /usr/share/ca-certificates/6746.pem
	I0814 09:57:16.678700  282733 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6746.pem
	I0814 09:57:16.682900  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6746.pem /etc/ssl/certs/51391683.0"
	I0814 09:57:16.689252  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67462.pem && ln -fs /usr/share/ca-certificates/67462.pem /etc/ssl/certs/67462.pem"
	I0814 09:57:16.695761  282733 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/67462.pem
	I0814 09:57:16.698452  282733 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 14 09:10 /usr/share/ca-certificates/67462.pem
	I0814 09:57:16.698494  282733 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67462.pem
	I0814 09:57:16.702666  282733 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67462.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:57:16.708960  282733 kubeadm.go:390] StartCluster: {Name:custom-weave-20210814093636-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210814093636-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:57:16.709045  282733 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0814 09:57:16.709088  282733 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 09:57:16.732120  282733 cri.go:76] found id: ""
	I0814 09:57:16.732197  282733 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:57:16.738334  282733 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:57:16.744477  282733 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0814 09:57:16.744512  282733 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:57:16.750517  282733 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:57:16.750553  282733 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
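	The long --ignore-preflight-errors list is deliberate for the docker driver: the "node" is itself a container, so checks such as Swap, Mem, SystemVerification, and the bridge-nf-call-iptables file test can fail even when the runtime is healthy (see "ignoring SystemVerification for kubeadm because of docker driver" above). The bridge case is visible earlier in this log: the sysctl stat failed at 09:57:04, after which minikube loaded the module itself with the same commands shown there:
	
		sudo modprobe br_netfilter
		sudo sysctl net.bridge.bridge-nf-call-iptables   # readable once br_netfilter is loaded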
	I0814 09:57:16.880565  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:17.380815  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:17.881015  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:18.380420  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:18.881004  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:19.380678  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:19.880259  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:20.381147  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:20.881055  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:21.380516  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:21.880505  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:22.381110  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:22.880665  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:23.380542  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:23.880870  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:24.380779  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:24.880701  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:25.380927  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:25.881137  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:26.381040  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:26.881030  250455 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:26.979312  250455 kubeadm.go:985] duration metric: took 12.809102419s to wait for elevateKubeSystemPrivileges.
	I0814 09:57:26.979339  250455 kubeadm.go:392] StartCluster complete in 5m2.547838589s
	I0814 09:57:26.979355  250455 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:26.979427  250455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:57:26.980036  250455 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:27.496347  250455 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210814095040-6746" rescaled to 1
	I0814 09:57:27.496422  250455 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:57:27.496452  250455 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:57:27.498190  250455 out.go:177] * Verifying Kubernetes components...
	I0814 09:57:27.498249  250455 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:27.496632  250455 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0814 09:57:27.496855  250455 config.go:177] Loaded profile config "default-k8s-different-port-20210814095040-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:57:27.498326  250455 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498336  250455 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498345  250455 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498355  250455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.498363  250455 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.498375  250455 addons.go:147] addon metrics-server should already be in state true
	I0814 09:57:27.498411  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.498348  250455 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.498683  250455 addons.go:147] addon dashboard should already be in state true
	I0814 09:57:27.498703  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.498716  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.498948  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.498328  250455 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:27.499129  250455 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.499140  250455 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:57:27.499168  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.499238  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.499649  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.591135  250455 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210814095040-6746"
	W0814 09:57:27.591165  250455 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:57:27.591195  250455 host.go:66] Checking if "default-k8s-different-port-20210814095040-6746" exists ...
	I0814 09:57:27.591734  250455 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210814095040-6746 --format={{.State.Status}}
	I0814 09:57:27.594700  250455 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0814 09:57:27.596282  250455 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0814 09:57:27.596336  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 09:57:27.596350  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0814 09:57:27.597812  250455 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0814 09:57:27.597865  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0814 09:57:27.597884  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0814 09:57:27.596396  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.597929  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.599764  250455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:57:27.599884  250455 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:57:27.599902  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:57:27.599954  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.665084  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.669545  250455 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210814095040-6746" to be "Ready" ...
	I0814 09:57:27.673504  250455 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
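The bash pipeline above dumps the kube-system/coredns ConfigMap, splices a hosts block in front of the forward directive with sed, and pushes the result back through kubectl replace; this is what later yields the "host record injected into CoreDNS" line. A sketch of the resulting Corefile fragment, assuming the stock v1.21 layout:

	.:53 {
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}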
	I0814 09:57:27.675654  250455 node_ready.go:49] node "default-k8s-different-port-20210814095040-6746" has status "Ready":"True"
	I0814 09:57:27.675673  250455 node_ready.go:38] duration metric: took 6.100787ms waiting for node "default-k8s-different-port-20210814095040-6746" to be "Ready" ...
	I0814 09:57:27.675684  250455 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0814 09:57:27.678615  250455 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:57:27.678634  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:57:27.678689  250455 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210814095040-6746
	I0814 09:57:27.682437  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.682707  250455 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-psntz" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:27.727536  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.743432  250455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/default-k8s-different-port-20210814095040-6746/id_rsa Username:docker}
	I0814 09:57:27.848313  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 09:57:27.848337  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0814 09:57:27.854391  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0814 09:57:27.854411  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0814 09:57:27.923471  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0814 09:57:27.923548  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0814 09:57:27.925638  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 09:57:27.925683  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0814 09:57:27.941227  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:57:28.005583  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:57:28.013191  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0814 09:57:28.013221  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0814 09:57:28.036868  250455 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:57:28.036897  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0814 09:57:28.134690  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0814 09:57:28.134715  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0814 09:57:28.138410  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 09:57:28.221543  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0814 09:57:28.221566  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0814 09:57:28.336057  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0814 09:57:28.336082  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0814 09:57:28.402988  250455 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0814 09:57:28.438766  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0814 09:57:28.438793  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0814 09:57:28.518405  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0814 09:57:28.518431  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0814 09:57:28.609860  250455 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:57:28.609885  250455 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0814 09:57:28.628049  250455 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 09:57:29.524651  250455 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.519031606s)
	I0814 09:57:29.602043  250455 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.463594755s)
	I0814 09:57:29.602084  250455 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210814095040-6746"
	I0814 09:57:29.713541  250455 pod_ready.go:102] pod "coredns-558bd4d5db-psntz" in "kube-system" namespace has status "Ready":"False"
	I0814 09:57:30.127247  250455 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.499142253s)
	I0814 09:57:30.129123  250455 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0814 09:57:30.129152  250455 addons.go:344] enableAddons completed in 2.632529919s
	I0814 09:57:31.211525  250455 pod_ready.go:97] error getting pod "coredns-558bd4d5db-psntz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-psntz" not found
	I0814 09:57:31.211556  250455 pod_ready.go:81] duration metric: took 3.528824609s waiting for pod "coredns-558bd4d5db-psntz" in "kube-system" namespace to be "Ready" ...
	E0814 09:57:31.211569  250455 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-psntz" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-psntz" not found
	I0814 09:57:31.211578  250455 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:34.023009  250455 pod_ready.go:102] pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace has status "Ready":"False"
	I0814 09:57:41.180448  282733 out.go:204]   - Generating certificates and keys ...
	I0814 09:57:41.183353  282733 out.go:204]   - Booting up control plane ...
	I0814 09:57:41.185840  282733 out.go:204]   - Configuring RBAC rules ...
	I0814 09:57:41.187901  282733 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0814 09:57:39.718572  250455 pod_ready.go:102] pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace has status "Ready":"False"
	I0814 09:57:40.222423  250455 pod_ready.go:92] pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.222453  250455 pod_ready.go:81] duration metric: took 9.010866627s waiting for pod "coredns-558bd4d5db-zjjkn" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.222466  250455 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.226690  250455 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.226711  250455 pod_ready.go:81] duration metric: took 4.234825ms waiting for pod "etcd-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.226723  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.230822  250455 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.230839  250455 pod_ready.go:81] duration metric: took 4.107139ms waiting for pod "kube-apiserver-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.230851  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.235010  250455 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.235026  250455 pod_ready.go:81] duration metric: took 4.165899ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.235038  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-klrbg" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.239012  250455 pod_ready.go:92] pod "kube-proxy-klrbg" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.239029  250455 pod_ready.go:81] duration metric: took 3.983115ms waiting for pod "kube-proxy-klrbg" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.239040  250455 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.620556  250455 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace has status "Ready":"True"
	I0814 09:57:40.620576  250455 pod_ready.go:81] duration metric: took 381.52642ms waiting for pod "kube-scheduler-default-k8s-different-port-20210814095040-6746" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:40.620587  250455 pod_ready.go:38] duration metric: took 12.944887405s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 09:57:40.620609  250455 api_server.go:50] waiting for apiserver process to appear ...
	I0814 09:57:40.620655  250455 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:57:40.644916  250455 api_server.go:70] duration metric: took 13.148457169s to wait for apiserver process to appear ...
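For reference, the flags in the pgrep check above: -f matches the pattern against the full command line, -x requires that match to cover the command line exactly, and -n selects the newest matching process, so the probe only succeeds once a kube-apiserver whose arguments mention "minikube" is actually running.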
	I0814 09:57:40.644941  250455 api_server.go:86] waiting for apiserver healthz status ...
	I0814 09:57:40.644952  250455 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0814 09:57:40.649438  250455 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0814 09:57:40.650227  250455 api_server.go:139] control plane version: v1.21.3
	I0814 09:57:40.650250  250455 api_server.go:129] duration metric: took 5.303285ms to wait for apiserver health ...
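The healthz probe above is a plain HTTPS GET against this profile's non-default apiserver port (8444). A hand-run equivalent, skipping certificate verification (a sketch; assumes curl is available on the host):

	curl -k https://192.168.49.2:8444/healthz
	# expected body on success: ok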
	I0814 09:57:40.650259  250455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 09:57:40.823310  250455 system_pods.go:59] 9 kube-system pods found
	I0814 09:57:40.823342  250455 system_pods.go:61] "coredns-558bd4d5db-zjjkn" [50cc162c-7c79-4bcb-a514-12fbea928898] Running
	I0814 09:57:40.823350  250455 system_pods.go:61] "etcd-default-k8s-different-port-20210814095040-6746" [c73a55e6-808a-4f05-85fb-040eb29a0cfa] Running
	I0814 09:57:40.823355  250455 system_pods.go:61] "kindnet-9zklk" [6f6c319c-8cf6-45c5-bba1-3a5999ff9a0e] Running
	I0814 09:57:40.823361  250455 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210814095040-6746" [df9f3575-c3f6-4921-8412-587a5aad2918] Running
	I0814 09:57:40.823372  250455 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210814095040-6746" [bf84559d-78a0-467e-8361-cf4b0badb18e] Running
	I0814 09:57:40.823383  250455 system_pods.go:61] "kube-proxy-klrbg" [18dba609-fb6b-4895-aca7-2d94942571f6] Running
	I0814 09:57:40.823390  250455 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210814095040-6746" [0a96c1f2-97cd-468f-9b5c-55bd7c0ad18c] Running
	I0814 09:57:40.823404  250455 system_pods.go:61] "metrics-server-7c784ccb57-2ms26" [0f0c284a-68cf-4545-835a-464713e03dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:57:40.823416  250455 system_pods.go:61] "storage-provisioner" [c822637b-9c4e-48fa-ba25-77aeb1c4f4ad] Running
	I0814 09:57:40.823428  250455 system_pods.go:74] duration metric: took 173.162554ms to wait for pod list to return data ...
	I0814 09:57:40.823439  250455 default_sa.go:34] waiting for default service account to be created ...
	I0814 09:57:41.021831  250455 default_sa.go:45] found service account: "default"
	I0814 09:57:41.021855  250455 default_sa.go:55] duration metric: took 198.408847ms for default service account to be created ...
	I0814 09:57:41.021865  250455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 09:57:41.222890  250455 system_pods.go:86] 9 kube-system pods found
	I0814 09:57:41.222916  250455 system_pods.go:89] "coredns-558bd4d5db-zjjkn" [50cc162c-7c79-4bcb-a514-12fbea928898] Running
	I0814 09:57:41.222925  250455 system_pods.go:89] "etcd-default-k8s-different-port-20210814095040-6746" [c73a55e6-808a-4f05-85fb-040eb29a0cfa] Running
	I0814 09:57:41.222932  250455 system_pods.go:89] "kindnet-9zklk" [6f6c319c-8cf6-45c5-bba1-3a5999ff9a0e] Running
	I0814 09:57:41.222939  250455 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210814095040-6746" [df9f3575-c3f6-4921-8412-587a5aad2918] Running
	I0814 09:57:41.222950  250455 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210814095040-6746" [bf84559d-78a0-467e-8361-cf4b0badb18e] Running
	I0814 09:57:41.222957  250455 system_pods.go:89] "kube-proxy-klrbg" [18dba609-fb6b-4895-aca7-2d94942571f6] Running
	I0814 09:57:41.222967  250455 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210814095040-6746" [0a96c1f2-97cd-468f-9b5c-55bd7c0ad18c] Running
	I0814 09:57:41.222981  250455 system_pods.go:89] "metrics-server-7c784ccb57-2ms26" [0f0c284a-68cf-4545-835a-464713e03dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 09:57:41.222991  250455 system_pods.go:89] "storage-provisioner" [c822637b-9c4e-48fa-ba25-77aeb1c4f4ad] Running
	I0814 09:57:41.223003  250455 system_pods.go:126] duration metric: took 201.132399ms to wait for k8s-apps to be running ...
	I0814 09:57:41.223013  250455 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 09:57:41.223060  250455 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:41.236368  250455 system_svc.go:56] duration metric: took 13.34901ms WaitForService to wait for kubelet.
	I0814 09:57:41.236397  250455 kubeadm.go:547] duration metric: took 13.7399425s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0814 09:57:41.236423  250455 node_conditions.go:102] verifying NodePressure condition ...
	I0814 09:57:41.420718  250455 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0814 09:57:41.420740  250455 node_conditions.go:123] node cpu capacity is 8
	I0814 09:57:41.420750  250455 node_conditions.go:105] duration metric: took 184.321757ms to run NodePressure ...
	I0814 09:57:41.420760  250455 start.go:231] waiting for startup goroutines ...
	I0814 09:57:41.486511  250455 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0814 09:57:41.488533  250455 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210814095040-6746" cluster and "default" namespace by default
	I0814 09:57:41.189297  282733 out.go:177] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0814 09:57:41.189345  282733 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0814 09:57:41.189393  282733 ssh_runner.go:149] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0814 09:57:41.192496  282733 ssh_runner.go:306] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0814 09:57:41.192516  282733 ssh_runner.go:316] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0814 09:57:41.208661  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
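The ssh_runner lines above show minikube's check-then-copy pattern for the CNI manifest: stat the remote path, and on a nonzero exit copy the local testdata file to the node before applying it. Sketched in shell ("node" is a hypothetical SSH alias for the container; the exact flags are assumptions):

	if ! ssh node 'stat -c "%s %y" /var/tmp/minikube/cni.yaml' >/dev/null 2>&1; then
	  scp testdata/weavenet.yaml node:/var/tmp/minikube/cni.yaml
	fi
	ssh node 'sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml'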
	I0814 09:57:41.673507  282733 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:57:41.673570  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:41.673570  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969 minikube.k8s.io/name=custom-weave-20210814093636-6746 minikube.k8s.io/updated_at=2021_08_14T09_57_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:41.753402  282733 ops.go:34] apiserver oom_adj: -16
	I0814 09:57:41.753417  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:42.333652  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:42.833306  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:43.333408  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:43.834059  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:44.333958  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:44.834039  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:45.333998  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:45.834028  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:46.333405  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:46.833848  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:47.333994  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:47.833870  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:48.333432  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:48.834179  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:49.333581  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:49.833252  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:50.333209  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:50.834002  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:51.333661  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:51.833244  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:52.333474  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:52.833971  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:53.333877  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:53.833352  282733 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:57:53.907593  282733 kubeadm.go:985] duration metric: took 12.234091569s to wait for elevateKubeSystemPrivileges.
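The burst of "kubectl get sa default" runs above is a poll: minikube retries on a short interval until the "default" ServiceAccount exists, then reports how long elevateKubeSystemPrivileges took. A minimal sketch of the loop shape (the real implementation is Go inside minikube; this shell form and the 500ms interval are assumptions for illustration):

	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done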
	I0814 09:57:53.907642  282733 kubeadm.go:392] StartCluster complete in 37.198685034s
	I0814 09:57:53.907663  282733 settings.go:142] acquiring lock: {Name:mkcd5b822e34f8a2a9e68b3a16adb8fe891a036f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:53.907758  282733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:57:53.911152  282733 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig: {Name:mkd1474ae092084e4d46ed204465553642d61d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:54.433928  282733 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20210814093636-6746" rescaled to 1
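The "rescaled to 1" line above records minikube trimming the stock two-replica CoreDNS Deployment down to one replica for a single-node cluster. An equivalent manual step would be (sketch):

	kubectl -n kube-system scale deployment coredns --replicas=1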
	I0814 09:57:54.433981  282733 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0814 09:57:54.436677  282733 out.go:177] * Verifying Kubernetes components...
	I0814 09:57:54.436729  282733 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:57:54.434039  282733 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 09:57:54.434070  282733 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0814 09:57:54.436859  282733 addons.go:59] Setting storage-provisioner=true in profile "custom-weave-20210814093636-6746"
	I0814 09:57:54.436883  282733 addons.go:135] Setting addon storage-provisioner=true in "custom-weave-20210814093636-6746"
	W0814 09:57:54.436890  282733 addons.go:147] addon storage-provisioner should already be in state true
	I0814 09:57:54.434201  282733 config.go:177] Loaded profile config "custom-weave-20210814093636-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:57:54.436923  282733 host.go:66] Checking if "custom-weave-20210814093636-6746" exists ...
	I0814 09:57:54.436921  282733 addons.go:59] Setting default-storageclass=true in profile "custom-weave-20210814093636-6746"
	I0814 09:57:54.436954  282733 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20210814093636-6746"
	I0814 09:57:54.437268  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:54.437444  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:54.498239  282733 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:57:54.498400  282733 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:57:54.498417  282733 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:57:54.498477  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:54.504029  282733 addons.go:135] Setting addon default-storageclass=true in "custom-weave-20210814093636-6746"
	W0814 09:57:54.504058  282733 addons.go:147] addon default-storageclass should already be in state true
	I0814 09:57:54.504093  282733 host.go:66] Checking if "custom-weave-20210814093636-6746" exists ...
	I0814 09:57:54.504645  282733 cli_runner.go:115] Run: docker container inspect custom-weave-20210814093636-6746 --format={{.State.Status}}
	I0814 09:57:54.540303  282733 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 09:57:54.542628  282733 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20210814093636-6746" to be "Ready" ...
	I0814 09:57:54.547173  282733 node_ready.go:49] node "custom-weave-20210814093636-6746" has status "Ready":"True"
	I0814 09:57:54.547283  282733 node_ready.go:38] duration metric: took 4.623779ms waiting for node "custom-weave-20210814093636-6746" to be "Ready" ...
	I0814 09:57:54.547316  282733 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0814 09:57:54.560458  282733 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-ptstj" in "kube-system" namespace to be "Ready" ...
	I0814 09:57:54.564099  282733 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:57:54.564119  282733 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:57:54.564180  282733 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210814093636-6746
	I0814 09:57:54.566890  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:54.608635  282733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/custom-weave-20210814093636-6746/id_rsa Username:docker}
	I0814 09:57:54.833922  282733 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:57:54.840983  282733 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:57:55.137976  282733 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	6c854f8b0cd1a       523cad1a4df73       15 seconds ago      Exited              dashboard-metrics-scraper   1                   54f901300abf8
	9d1091049f869       9a07b5b4bfac0       25 seconds ago      Running             kubernetes-dashboard        0                   d6fcb757f64e9
	82e9e90eeb526       6e38f40d628db       26 seconds ago      Running             storage-provisioner         0                   ef0195e898ea1
	0098384a3b4ed       296a6d5035e2d       28 seconds ago      Running             coredns                     0                   a0ebd929835ad
	19c2862b1cff8       6de166512aa22       29 seconds ago      Running             kindnet-cni                 0                   31cc2c576d9a6
	f0f81c09afef0       adb2816ea823a       29 seconds ago      Running             kube-proxy                  0                   586824e601bb5
	a49b6961ce490       6be0dc1302e30       50 seconds ago      Running             kube-scheduler              0                   844a229d51e27
	ccaca53fc4432       0369cf4303ffd       50 seconds ago      Running             etcd                        0                   e9c15c0af872a
	e79c62702ba4f       3d174f00aa39e       50 seconds ago      Running             kube-apiserver              0                   38fa574413e2e
	d1f9978c5866c       bc2bb319a7038       50 seconds ago      Running             kube-controller-manager     0                   81c3e2185d7c1
	
	* 
	* ==> containerd <==
	* -- Logs begin at Sat 2021-08-14 09:52:08 UTC, end at Sat 2021-08-14 09:57:57 UTC. --
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.804707188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.804940948Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.806715419Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.837037262Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:40 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:40.837509849Z" level=info msg="StartContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.016938952Z" level=info msg="StartContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\" returns successfully"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.057166967Z" level=info msg="Finish piping stderr of container \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.057195766Z" level=info msg="Finish piping stdout of container \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.058575673Z" level=info msg="TaskExit event &TaskExit{ContainerID:a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06,ID:a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06,Pid:6324,ExitStatus:1,ExitedAt:2021-08-14 09:57:41.058354254 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.101787134Z" level=info msg="shim disconnected" id=a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.101869058Z" level=error msg="copy shim log" error="read /proc/self/fd/138: file already closed"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.226018904Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.261395257Z" level=info msg="CreateContainer within sandbox \"54f901300abf8676fc6d92298dd5f41895218b2d803c4ad036e8cafc010a3801\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.261888663Z" level=info msg="StartContainer for \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.427034214Z" level=info msg="StartContainer for \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\" returns successfully"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.473230092Z" level=info msg="Finish piping stdout of container \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.473287946Z" level=info msg="Finish piping stderr of container \"6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c\""
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.474045099Z" level=info msg="TaskExit event &TaskExit{ContainerID:6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c,ID:6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c,Pid:6393,ExitStatus:1,ExitedAt:2021-08-14 09:57:41.473801024 +0000 UTC,XXX_unrecognized:[],}"
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.529700931Z" level=info msg="shim disconnected" id=6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:41.529790521Z" level=error msg="copy shim log" error="read /proc/self/fd/138: file already closed"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:42.228137269Z" level=info msg="RemoveContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\""
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:42.233292081Z" level=info msg="RemoveContainer for \"a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06\" returns successfully"
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:44.024197104Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:44.078650284Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" host=fake.domain
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 containerd[336]: time="2021-08-14T09:57:44.079927287Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host"
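The pull above fails at DNS: fake.domain has no record (per the "no such host" error), so containerd's resolver gives up at the registry HEAD request and the metrics-server pod stays Pending with ContainersNotReady, as seen earlier in this log. A reproduction sketch from inside the node (assumes crictl is present there):

	sudo crictl pull fake.domain/k8s.gcr.io/echoserver:1.4
	# fails with: dial tcp: lookup fake.domain ... no such host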
	
	* 
	* ==> coredns [0098384a3b4ed2ad334cf0768e054a4de561244764ebda682998d0ee2d1f6918] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210814095040-6746
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20210814095040-6746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3c4d0455dfed89650fdf54f9f70d551912b4969
	                    minikube.k8s.io/name=default-k8s-different-port-20210814095040-6746
	                    minikube.k8s.io/updated_at=2021_08_14T09_57_14_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Aug 2021 09:57:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20210814095040-6746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Aug 2021 09:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Aug 2021 09:57:49 +0000   Sat, 14 Aug 2021 09:57:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20210814095040-6746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                18f93bb6-a3ab-4de6-8ec2-f7bfb51bb31f
	  Boot ID:                    6b575b39-c337-47ac-88d9-ba67a5255a75
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-zjjkn                                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-default-k8s-different-port-20210814095040-6746                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-9zklk                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-default-k8s-different-port-20210814095040-6746              250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20210814095040-6746    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-klrbg                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-default-k8s-different-port-20210814095040-6746              100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 metrics-server-7c784ccb57-2ms26                                            100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         28s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-54btm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-hjr7d                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  52s (x4 over 52s)  kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x4 over 52s)  kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x4 over 52s)  kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                31s                kubelet     Node default-k8s-different-port-20210814095040-6746 status is now: NodeReady
	  Normal  Starting                 30s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000025] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fcd9d5f352a7
	[  +0.000002] ll header: 00000000: 02 42 fd 56 42 d1 02 42 c0 a8 31 02 08 00        .B.VB..B..1...
	[Aug14 09:53] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:54] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:55] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:56] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth7345da06
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 92 45 4c d4 7c ac 08 06        .......EL.|...
	[  +3.039718] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth23b7056c
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff da 36 50 a4 55 3b 08 06        .......6P.U;..
	[ +14.868976] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug14 09:57] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethf06170b5
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5e 2a b3 8e d4 88 08 06        ......^*......
	[  +2.040503] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb67aee0d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff a6 9b ec 13 d9 b6 08 06        ..............
	[  +0.704016] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev veth708a0835
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 86 ba f1 19 7b e5 08 06        ..........{...
	[  +0.299880] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth6d17692c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 7e 73 53 da e8 08 06        .......~sS....
	[ +23.664383] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff ee 66 27 07 a4 b6 08 06        .......f'.....
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff ee 66 27 07 a4 b6 08 06        .......f'.....
	
	* 
	* ==> etcd [ccaca53fc4432841c3f6c9dc7043251f248f816ff3227fd9cc6ac9eb27e7c371] <==
	* 2021-08-14 09:57:07.306983 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-14 09:57:07.307052 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-14 09:57:21.433489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:57:30.840860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:57:34.018467 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (800.716482ms) to execute
	2021-08-14 09:57:34.018574 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2566" took too long (862.180967ms) to execute
	2021-08-14 09:57:35.273417 W | wal: sync duration of 1.246803688s, expected less than 1s
	2021-08-14 09:57:35.273739 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.220497259s) to execute
	2021-08-14 09:57:35.274091 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (1.055886126s) to execute
	2021-08-14 09:57:35.274654 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1144" took too long (387.97457ms) to execute
	2021-08-14 09:57:35.274689 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-default-k8s-different-port-20210814095040-6746\" " with result "range_response_count:1 size:5282" took too long (1.245119574s) to execute
	2021-08-14 09:57:36.613195 W | wal: sync duration of 1.072608169s, expected less than 1s
	2021-08-14 09:57:36.614034 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (896.441878ms) to execute
	2021-08-14 09:57:38.053672 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000074231s) to execute
	WARNING: 2021/08/14 09:57:38 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-14 09:57:38.575258 W | wal: sync duration of 1.961929212s, expected less than 1s
	2021-08-14 09:57:39.712059 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.651876595s) to execute
	2021-08-14 09:57:39.712497 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.307445228s) to execute
	2021-08-14 09:57:39.712760 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1144" took too long (2.426311943s) to execute
	2021-08-14 09:57:39.712852 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5067" took too long (1.389317314s) to execute
	2021-08-14 09:57:39.712973 W | etcdserver: read-only range request "key:\"/registry/minions/default-k8s-different-port-20210814095040-6746\" " with result "range_response_count:1 size:5067" took too long (3.095121193s) to execute
	2021-08-14 09:57:39.713398 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-zjjkn\" " with result "range_response_count:1 size:4480" took too long (3.091022166s) to execute
	2021-08-14 09:57:39.713655 W | etcdserver: request "header:<ID:8128006959887004037 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20210814095040-6746\" mod_revision:492 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20210814095040-6746\" value_size:645 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-different-port-20210814095040-6746\" > >>" with result "size:16" took too long (175.862849ms) to execute
	2021-08-14 09:57:40.841109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-14 09:57:50.841460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  09:57:57 up  1:40,  0 users,  load average: 3.02, 2.04, 1.83
	Linux default-k8s-different-port-20210814095040-6746 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e79c62702ba4f72c33e44c8f36cea47a1609d80d5ce7e8aee62264d4ca3e2e06] <==
	* Trace[926388937]: ---"Transaction committed" 1333ms (09:57:00.616)
	Trace[926388937]: [1.336460324s] [1.336460324s] END
	I0814 09:57:36.616543       1 trace.go:205] Trace[1163206533]: "Patch" url:/api/v1/namespaces/kube-system/pods/etcd-default-k8s-different-port-20210814095040-6746/status,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:57:35.279) (total time: 1336ms):
	Trace[1163206533]: ---"Object stored in database" 1334ms (09:57:00.616)
	Trace[1163206533]: [1.336880178s] [1.336880178s] END
	I0814 09:57:38.576856       1 trace.go:205] Trace[1835893352]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:57:38.060) (total time: 516ms):
	Trace[1835893352]: ---"Object stored in database" 515ms (09:57:00.576)
	Trace[1835893352]: [516.030459ms] [516.030459ms] END
	I0814 09:57:39.205281       1 client.go:360] parsed scheme: "passthrough"
	I0814 09:57:39.205324       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0814 09:57:39.205334       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0814 09:57:39.714935       1 trace.go:205] Trace[1543031706]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:57:37.285) (total time: 2428ms):
	Trace[1543031706]: ---"About to write a response" 2427ms (09:57:00.713)
	Trace[1543031706]: [2.42899888s] [2.42899888s] END
	I0814 09:57:39.716089       1 trace.go:205] Trace[383521653]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Aug-2021 09:57:38.323) (total time: 1392ms):
	Trace[383521653]: [1.392953815s] [1.392953815s] END
	I0814 09:57:39.716396       1 trace.go:205] Trace[384732503]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:57:38.323) (total time: 1393ms):
	Trace[384732503]: ---"Listing from storage done" 1393ms (09:57:00.716)
	Trace[384732503]: [1.393295026s] [1.393295026s] END
	I0814 09:57:39.718087       1 trace.go:205] Trace[1335615492]: "Get" url:/api/v1/nodes/default-k8s-different-port-20210814095040-6746,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Aug-2021 09:57:36.617) (total time: 3100ms):
	Trace[1335615492]: ---"About to write a response" 3098ms (09:57:00.715)
	Trace[1335615492]: [3.100859358s] [3.100859358s] END
	I0814 09:57:39.719191       1 trace.go:205] Trace[1060095040]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-zjjkn,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (14-Aug-2021 09:57:36.622) (total time: 3097ms):
	Trace[1060095040]: ---"About to write a response" 3096ms (09:57:00.718)
	Trace[1060095040]: [3.097139245s] [3.097139245s] END
	
	* 
	* ==> kube-controller-manager [d1f9978c5866c20dc51963b2dea6a5a939391a91859233faa6c39a5706ec6bde] <==
	* I0814 09:57:29.218290       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0814 09:57:29.224701       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0814 09:57:29.224764       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0814 09:57:29.307018       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-2ms26"
	I0814 09:57:29.727668       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0814 09:57:29.737743       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.745620       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:57:29.752541       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.752882       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:57:29.761720       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0814 09:57:29.803705       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.803777       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:57:29.814808       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.819942       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0814 09:57:29.829759       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.830026       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.832833       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.832896       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0814 09:57:29.833281       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0814 09:57:29.833317       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0814 09:57:29.906023       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-54btm"
	I0814 09:57:29.908430       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-hjr7d"
	I0814 09:57:31.646181       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0814 09:57:56.974094       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0814 09:57:57.340975       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f0f81c09afef0fe8ea79cd9491a2b047484b1e6915ba7513902429bb9142f00d] <==
	* I0814 09:57:27.593121       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0814 09:57:27.593170       1 server_others.go:140] Detected node IP 192.168.49.2
	W0814 09:57:27.593222       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0814 09:57:27.680556       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0814 09:57:27.680590       1 server_others.go:212] Using iptables Proxier.
	I0814 09:57:27.680606       1 server_others.go:219] creating dualStackProxier for iptables.
	W0814 09:57:27.680620       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0814 09:57:27.681009       1 server.go:643] Version: v1.21.3
	I0814 09:57:27.684922       1 config.go:224] Starting endpoint slice config controller
	I0814 09:57:27.685061       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0814 09:57:27.685234       1 config.go:315] Starting service config controller
	I0814 09:57:27.685242       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0814 09:57:27.705654       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0814 09:57:27.722530       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0814 09:57:27.801538       1 shared_informer.go:247] Caches are synced for service config 
	I0814 09:57:27.801620       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a49b6961ce490a729c64296154827b1b23bde3f488aafe4b6b4ea68d5283fef3] <==
	* I0814 09:57:10.819379       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:57:10.819419       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 09:57:10.819623       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0814 09:57:10.819686       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0814 09:57:10.820657       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 09:57:10.824237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:57:10.824293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:57:10.824359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:57:10.824388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:10.824411       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 09:57:10.824528       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 09:57:10.824536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 09:57:10.824662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:10.826134       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 09:57:10.826228       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:57:10.826420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 09:57:10.826687       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:10.826764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:11.671044       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 09:57:11.714556       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 09:57:11.791088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 09:57:11.833580       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 09:57:11.902766       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 09:57:11.905723       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0814 09:57:12.320512       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2021-08-14 09:52:08 UTC, end at Sat 2021-08-14 09:57:57 UTC. --
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:30.006019    4822 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk7dc\" (UniqueName: \"kubernetes.io/projected/2936fbc6-dc7a-429f-b4ae-fa739e5e2c42-kube-api-access-kk7dc\") pod \"kubernetes-dashboard-6fcdf4f6d-hjr7d\" (UID: \"2936fbc6-dc7a-429f-b4ae-fa739e5e2c42\") "
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409203    4822 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409265    4822 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409453    4822 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wbv6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2ms26_kube-system(0f0c284a-68cf-4545-835a-464713e03dfc): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 14 09:57:30 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:30.409520    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2ms26" podUID=0f0c284a-68cf-4545-835a-464713e03dfc
	Aug 14 09:57:31 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:31.134570    4822 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 14 09:57:31 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:31.135408    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-2ms26" podUID=0f0c284a-68cf-4545-835a-464713e03dfc
	Aug 14 09:57:41 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:41.223821    4822 scope.go:111] "RemoveContainer" containerID="a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:42.227218    4822 scope.go:111] "RemoveContainer" containerID="a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:42.227451    4822 scope.go:111] "RemoveContainer" containerID="6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:42.227766    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-54btm_kubernetes-dashboard(dfc36e40-fb64-41d8-a005-ab76555690d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-54btm" podUID=dfc36e40-fb64-41d8-a005-ab76555690d0
	Aug 14 09:57:42 default-k8s-different-port-20210814095040-6746 kubelet[4822]: W0814 09:57:42.361181    4822 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddfc36e40-fb64-41d8-a005-ab76555690d0/a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06 WatchSource:0}: container "a7d663ee45e126de74d8678b211eff7926113c16a7dd4fe7d2de023c7b512e06" in namespace "k8s.io": not found
	Aug 14 09:57:43 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:43.230896    4822 scope.go:111] "RemoveContainer" containerID="6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	Aug 14 09:57:43 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:43.231274    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-54btm_kubernetes-dashboard(dfc36e40-fb64-41d8-a005-ab76555690d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-54btm" podUID=dfc36e40-fb64-41d8-a005-ab76555690d0
	Aug 14 09:57:43 default-k8s-different-port-20210814095040-6746 kubelet[4822]: W0814 09:57:43.866539    4822 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddfc36e40-fb64-41d8-a005-ab76555690d0/6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c WatchSource:0}: task 6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c not found: not found
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080169    4822 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080226    4822 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080365    4822 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wbv6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-2ms26_kube-system(0f0c284a-68cf-4545-835a-464713e03dfc): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host
	Aug 14 09:57:44 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:44.080423    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.49.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-2ms26" podUID=0f0c284a-68cf-4545-835a-464713e03dfc
	Aug 14 09:57:49 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:49.919811    4822 scope.go:111] "RemoveContainer" containerID="6c854f8b0cd1a6147ce01700b8f13a10234bf7f7e44a607d5db32a818ef3999c"
	Aug 14 09:57:49 default-k8s-different-port-20210814095040-6746 kubelet[4822]: E0814 09:57:49.920261    4822 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-54btm_kubernetes-dashboard(dfc36e40-fb64-41d8-a005-ab76555690d0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-54btm" podUID=dfc36e40-fb64-41d8-a005-ab76555690d0
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 kubelet[4822]: I0814 09:57:52.592691    4822 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 systemd[1]: kubelet.service: Succeeded.
	Aug 14 09:57:52 default-k8s-different-port-20210814095040-6746 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [9d1091049f869da07510f4d348474719b7062232e0caf2328573ed94c5ad0526] <==
	* 2021/08/14 09:57:31 Using namespace: kubernetes-dashboard
	2021/08/14 09:57:31 Using in-cluster config to connect to apiserver
	2021/08/14 09:57:31 Using secret token for csrf signing
	2021/08/14 09:57:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/14 09:57:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/14 09:57:31 Successful initial request to the apiserver, version: v1.21.3
	2021/08/14 09:57:31 Generating JWE encryption key
	2021/08/14 09:57:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/14 09:57:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/14 09:57:32 Initializing JWE encryption key from synchronized object
	2021/08/14 09:57:32 Creating in-cluster Sidecar client
	2021/08/14 09:57:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/14 09:57:32 Serving insecurely on HTTP port: 9090
	2021/08/14 09:57:31 Starting overwatch
	
	* 
	* ==> storage-provisioner [82e9e90eeb526cacbe952950a7f7eca68c2f5de7f144498761bfb29afd358557] <==
	* I0814 09:57:30.846872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 09:57:30.860613       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 09:57:30.860668       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 09:57:30.866955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 09:57:30.867063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210814095040-6746_e9f82a95-6a81-4f0d-a23c-89281c09e4f3!
	I0814 09:57:30.867812       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34300b54-b6f4-4572-b965-67fa08c77a62", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210814095040-6746_e9f82a95-6a81-4f0d-a23c-89281c09e4f3 became leader
	I0814 09:57:30.967510       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210814095040-6746_e9f82a95-6a81-4f0d-a23c-89281c09e4f3!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746: exit status 2 (345.867065ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-2ms26
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 describe pod metrics-server-7c784ccb57-2ms26
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210814095040-6746 describe pod metrics-server-7c784ccb57-2ms26: exit status 1 (71.821797ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-2ms26" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20210814095040-6746 describe pod metrics-server-7c784ccb57-2ms26: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (5.84s)
E0814 10:03:31.456509    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt: no such file or directory
E0814 10:03:51.936766    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt: no such file or directory
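Triage note: the describe at helpers_test.go:276 returned NotFound for a pod that was listed as non-running a moment earlier, most likely because the pod was replaced between the two kubectl calls. A minimal sketch for re-running the same post-mortem by hand, with <profile> as a placeholder for the profile name (both flags are standard kubectl):

	kubectl --context <profile> get po -A --field-selector=status.phase!=Running -o name
	kubectl --context <profile> -n kube-system describe pod <pod-from-previous-step>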

                                                
                                    

Test pass (227/264)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 12.44
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.21.3/json-events 9.6
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.06
17 TestDownloadOnly/v1.22.0-rc.0/json-events 12.23
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.35
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
25 TestDownloadOnlyKic 7.84
26 TestOffline 99.2
29 TestAddons/parallel/Registry 17.11
30 TestAddons/parallel/Ingress 38.33
31 TestAddons/parallel/MetricsServer 5.61
32 TestAddons/parallel/HelmTiller 8.38
33 TestAddons/parallel/Olm 51.42
34 TestAddons/parallel/CSI 53.31
35 TestAddons/parallel/GCPAuth 43.94
36 TestCertOptions 47.1
38 TestForceSystemdFlag 52.18
39 TestForceSystemdEnv 47.06
40 TestKVMDriverInstallOrUpdate 3.62
44 TestErrorSpam/setup 43.5
45 TestErrorSpam/start 0.91
46 TestErrorSpam/status 0.9
47 TestErrorSpam/pause 3.38
48 TestErrorSpam/unpause 1.21
49 TestErrorSpam/stop 23.44
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 73.69
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 15.38
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.21
60 TestFunctional/serial/CacheCmd/cache/add_remote 2.38
61 TestFunctional/serial/CacheCmd/cache/add_local 1.36
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
63 TestFunctional/serial/CacheCmd/cache/list 0.05
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
66 TestFunctional/serial/CacheCmd/cache/delete 0.1
67 TestFunctional/serial/MinikubeKubectlCmd 0.1
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
69 TestFunctional/serial/ExtraConfig 38.04
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.1
72 TestFunctional/serial/LogsFileCmd 0.97
74 TestFunctional/parallel/ConfigCmd 0.41
75 TestFunctional/parallel/DashboardCmd 2.79
76 TestFunctional/parallel/DryRun 0.57
77 TestFunctional/parallel/InternationalLanguage 0.22
78 TestFunctional/parallel/StatusCmd 0.93
81 TestFunctional/parallel/ServiceCmd 28.52
82 TestFunctional/parallel/AddonsCmd 0.17
83 TestFunctional/parallel/PersistentVolumeClaim 49.1
85 TestFunctional/parallel/SSHCmd 0.56
86 TestFunctional/parallel/CpCmd 0.56
87 TestFunctional/parallel/MySQL 17.85
88 TestFunctional/parallel/FileSync 0.33
89 TestFunctional/parallel/CertSync 1.68
93 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/LoadImage 1.79
95 TestFunctional/parallel/RemoveImage 2.23
96 TestFunctional/parallel/LoadImageFromFile 1.49
97 TestFunctional/parallel/BuildImage 3.37
98 TestFunctional/parallel/ListImages 0.3
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
102 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
109 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
110 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
111 TestFunctional/parallel/ProfileCmd/profile_list 0.36
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
113 TestFunctional/parallel/MountCmd/any-port 9.69
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.8
116 TestFunctional/parallel/MountCmd/specific-port 1.65
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.03
122 TestFunctional/delete_minikube_cached_images 0.03
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
147 TestKicCustomNetwork/create_custom_network 34.53
148 TestKicCustomNetwork/use_default_bridge_network 24.38
149 TestKicExistingNetwork 25.03
150 TestMainNoArgs 0.05
153 TestMultiNode/serial/FreshStart2Nodes 111.86
154 TestMultiNode/serial/DeployApp2Nodes 4.86
155 TestMultiNode/serial/PingHostFrom2Pods 0.83
156 TestMultiNode/serial/AddNode 42.07
157 TestMultiNode/serial/ProfileList 0.29
158 TestMultiNode/serial/CopyFile 2.28
159 TestMultiNode/serial/StopNode 21.8
160 TestMultiNode/serial/StartAfterStop 36.12
161 TestMultiNode/serial/RestartKeepsNodes 193.74
162 TestMultiNode/serial/DeleteNode 24.64
163 TestMultiNode/serial/StopMultiNode 41.39
164 TestMultiNode/serial/RestartMultiNode 91.68
165 TestMultiNode/serial/ValidateNameConflict 45.86
171 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
172 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.85
174 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
175 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10.21
177 TestDebPackageInstall/install_amd64_debian:10/minikube 0
178 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 11.11
180 TestDebPackageInstall/install_amd64_debian:9/minikube 0
181 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.27
183 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 15.54
186 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
187 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 14.72
189 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
190 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 15.16
192 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
193 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 13.77
194 TestPreload 133.56
199 TestInsufficientStorage 12.95
202 TestKubernetesUpgrade 193.33
203 TestMissingContainerUpgrade 142.85
212 TestPause/serial/Start 70.37
220 TestNetworkPlugins/group/false 0.74
224 TestPause/serial/SecondStartNoReconfiguration 22.37
227 TestStartStop/group/old-k8s-version/serial/FirstStart 124.6
229 TestPause/serial/Unpause 0.75
231 TestPause/serial/DeletePaused 3.22
232 TestPause/serial/VerifyDeletedResources 0.75
233 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
235 TestStartStop/group/no-preload/serial/FirstStart 92.27
236 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.7
237 TestStartStop/group/old-k8s-version/serial/Stop 20.78
238 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
239 TestStartStop/group/old-k8s-version/serial/SecondStart 87.68
240 TestStartStop/group/no-preload/serial/DeployApp 8.33
241 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.65
242 TestStartStop/group/no-preload/serial/Stop 20.64
243 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
244 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
245 TestStartStop/group/no-preload/serial/SecondStart 321.96
246 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
247 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
250 TestStartStop/group/embed-certs/serial/FirstStart 75.3
251 TestStartStop/group/embed-certs/serial/DeployApp 8.37
253 TestStartStop/group/embed-certs/serial/Stop 20.56
254 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
255 TestStartStop/group/embed-certs/serial/SecondStart 343.86
256 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
257 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
258 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
261 TestStartStop/group/default-k8s-different-port/serial/FirstStart 56.51
262 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
264 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
266 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.5
267 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.69
268 TestStartStop/group/default-k8s-different-port/serial/Stop 20.8
269 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.18
270 TestStartStop/group/default-k8s-different-port/serial/SecondStart 335.12
272 TestStartStop/group/newest-cni/serial/FirstStart 59.8
273 TestStartStop/group/newest-cni/serial/DeployApp 0
274 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.56
275 TestStartStop/group/newest-cni/serial/Stop 20.66
276 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
277 TestStartStop/group/newest-cni/serial/SecondStart 34.7
278 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
279 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
280 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
282 TestNetworkPlugins/group/auto/Start 70.46
283 TestNetworkPlugins/group/auto/KubeletFlags 0.28
284 TestNetworkPlugins/group/auto/NetCatPod 9.28
285 TestNetworkPlugins/group/auto/DNS 0.2
286 TestNetworkPlugins/group/auto/Localhost 0.16
287 TestNetworkPlugins/group/auto/HairPin 0.15
288 TestNetworkPlugins/group/custom-weave/Start 73.56
289 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
290 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.09
291 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.3
293 TestNetworkPlugins/group/cilium/Start 107.17
294 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.28
295 TestNetworkPlugins/group/custom-weave/NetCatPod 9.25
296 TestNetworkPlugins/group/calico/Start 83.93
297 TestNetworkPlugins/group/calico/ControllerPod 5.02
298 TestNetworkPlugins/group/cilium/ControllerPod 6.01
299 TestNetworkPlugins/group/calico/KubeletFlags 0.28
300 TestNetworkPlugins/group/calico/NetCatPod 10.28
301 TestNetworkPlugins/group/cilium/KubeletFlags 0.29
302 TestNetworkPlugins/group/cilium/NetCatPod 9.41
303 TestNetworkPlugins/group/calico/DNS 0.16
304 TestNetworkPlugins/group/calico/Localhost 0.15
305 TestNetworkPlugins/group/calico/HairPin 0.14
306 TestNetworkPlugins/group/cilium/DNS 0.15
307 TestNetworkPlugins/group/cilium/Localhost 0.15
308 TestNetworkPlugins/group/cilium/HairPin 0.19
309 TestNetworkPlugins/group/enable-default-cni/Start 244.85
310 TestNetworkPlugins/group/kindnet/Start 71.65
311 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
313 TestNetworkPlugins/group/kindnet/NetCatPod 8.24
314 TestNetworkPlugins/group/kindnet/DNS 0.17
315 TestNetworkPlugins/group/kindnet/Localhost 0.13
316 TestNetworkPlugins/group/kindnet/HairPin 0.13
317 TestNetworkPlugins/group/bridge/Start 95.25
318 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
319 TestNetworkPlugins/group/bridge/NetCatPod 7.24
320 TestNetworkPlugins/group/bridge/DNS 0.14
321 TestNetworkPlugins/group/bridge/Localhost 0.13
322 TestNetworkPlugins/group/bridge/HairPin 0.15
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
TestDownloadOnly/v1.14.0/json-events (12.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210814090438-6746 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210814090438-6746 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.444483413s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (12.44s)
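Triage note: the json-events variants exercise minikube's machine-readable output, where "start -o=json" emits one JSON event object per line. A minimal sketch for inspecting that stream by hand, assuming the CloudEvents-style "type" field that the JSONOutput tests key on, and using an illustrative profile name:

	out/minikube-linux-amd64 start -o=json --download-only -p download-demo \
	  --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker \
	  | jq -r '.type'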

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)
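Triage note: preload-exists only asserts that the preloaded-images tarball is already in the local cache after the download-only start above. A rough manual equivalent, assuming minikube's default cache layout under ~/.minikube (the exact tarball name varies with the preload version):

	ls ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-*-v1.14.0-containerd-*.tar.lz4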

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210814090438-6746
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210814090438-6746: exit status 85 (59.278906ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:04:38
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:04:38.143465    6758 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:04:38.143626    6758 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:04:38.143634    6758 out.go:311] Setting ErrFile to fd 2...
	I0814 09:04:38.143637    6758 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:04:38.143721    6758 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	W0814 09:04:38.143822    6758 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/config/config.json: no such file or directory
	I0814 09:04:38.144016    6758 out.go:305] Setting JSON to true
	I0814 09:04:38.178423    6758 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":2840,"bootTime":1628929038,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:04:38.178495    6758 start.go:121] virtualization: kvm guest
	I0814 09:04:38.181246    6758 notify.go:169] Checking for updates...
	I0814 09:04:38.183005    6758 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:04:38.225931    6758 docker.go:132] docker version: linux-19.03.15
	I0814 09:04:38.226001    6758 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:04:38.521997    6758 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-14 09:04:38.256698971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:04:38.522079    6758 docker.go:244] overlay module found
	I0814 09:04:38.523675    6758 start.go:278] selected driver: docker
	I0814 09:04:38.523689    6758 start.go:751] validating driver "docker" against <nil>
	I0814 09:04:38.524108    6758 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:04:38.599225    6758 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-14 09:04:38.556148648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:04:38.599321    6758 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0814 09:04:38.599775    6758 start_flags.go:344] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0814 09:04:38.599855    6758 start_flags.go:679] Waiting for components to verify: map[apiserver:true system_pods:true]
	I0814 09:04:38.599871    6758 cni.go:93] Creating CNI manager for ""
	I0814 09:04:38.599879    6758 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:04:38.599888    6758 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:04:38.599895    6758 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0814 09:04:38.599902    6758 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:04:38.599910    6758 start_flags.go:277] config:
	{Name:download-only-20210814090438-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210814090438-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:04:38.601746    6758 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:04:38.603089    6758 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:04:38.603123    6758 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:04:38.629882    6758 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:04:38.629912    6758 cache.go:56] Caching tarball of preloaded images
	I0814 09:04:38.630111    6758 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0814 09:04:38.631649    6758 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0814 09:04:38.659467    6758 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:8891d3d5a9795ff90493434142d1724b -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:04:38.673511    6758 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:04:38.673531    6758 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:04:47.763481    6758 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0814 09:04:47.763572    6758 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210814090438-6746"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)
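
The download.go and preload.go lines above show the preload tarball being fetched with a "checksum=md5:..." query parameter and then verified on disk. A minimal Go sketch of that verification step (an illustration only, using a hypothetical verifyMD5 helper, not minikube's actual preload.go code):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams a file through MD5 and compares the hex digest
	// against the expected value (the string after "checksum=md5:" above).
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// File name and checksum taken from the download.go log line above.
		err := verifyMD5("preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4",
			"8891d3d5a9795ff90493434142d1724b")
		fmt.Println(err)
	}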

TestDownloadOnly/v1.21.3/json-events (9.6s)

=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210814090438-6746 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210814090438-6746 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.595249131s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (9.60s)

TestDownloadOnly/v1.21.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210814090438-6746
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210814090438-6746: exit status 85 (61.172577ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:04:50
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:04:50.648561    6900 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:04:50.648621    6900 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:04:50.648631    6900 out.go:311] Setting ErrFile to fd 2...
	I0814 09:04:50.648634    6900 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:04:50.648729    6900 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	W0814 09:04:50.648843    6900 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/config/config.json: no such file or directory
	I0814 09:04:50.648937    6900 out.go:305] Setting JSON to true
	I0814 09:04:50.682502    6900 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":2853,"bootTime":1628929038,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:04:50.682613    6900 start.go:121] virtualization: kvm guest
	I0814 09:04:50.685001    6900 notify.go:169] Checking for updates...
	I0814 09:04:50.687807    6900 config.go:177] Loaded profile config "download-only-20210814090438-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W0814 09:04:50.687864    6900 start.go:659] api.Load failed for download-only-20210814090438-6746: filestore "download-only-20210814090438-6746": Docker machine "download-only-20210814090438-6746" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0814 09:04:50.687916    6900 driver.go:335] Setting default libvirt URI to qemu:///system
	W0814 09:04:50.687971    6900 start.go:659] api.Load failed for download-only-20210814090438-6746: filestore "download-only-20210814090438-6746": Docker machine "download-only-20210814090438-6746" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0814 09:04:50.729058    6900 docker.go:132] docker version: linux-19.03.15
	I0814 09:04:50.729158    6900 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:04:50.802588    6900 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-14 09:04:50.759431796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:04:50.802671    6900 docker.go:244] overlay module found
	I0814 09:04:50.804580    6900 start.go:278] selected driver: docker
	I0814 09:04:50.804598    6900 start.go:751] validating driver "docker" against &{Name:download-only-20210814090438-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210814090438-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:04:50.805149    6900 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:04:50.881161    6900 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-14 09:04:50.836112596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:04:50.881678    6900 cni.go:93] Creating CNI manager for ""
	I0814 09:04:50.881693    6900 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:04:50.881703    6900 start_flags.go:277] config:
	{Name:download-only-20210814090438-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210814090438-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:04:50.883562    6900 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:04:50.884860    6900 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:04:50.884958    6900 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:04:50.911655    6900 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:04:50.911681    6900 cache.go:56] Caching tarball of preloaded images
	I0814 09:04:50.911927    6900 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0814 09:04:50.913592    6900 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	I0814 09:04:50.943142    6900 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:6ee74ddc722ac9485c71891d6e62193d -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0814 09:04:50.954496    6900 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:04:50.954518    6900 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210814090438-6746"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.06s)
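
The cni.go lines in the log above record the decision minikube makes for this configuration: with the docker driver and the containerd runtime there is no built-in pod network, so a CNI is required, kindnet is recommended, and the kubelet is pointed at a dedicated CNI conf dir via extra-config. A simplified sketch of that decision (hypothetical chooseCNI helper; minikube's real cni.go covers many more cases):

	package main

	import "fmt"

	// chooseCNI mirrors the log lines above: a kic driver (docker/podman)
	// combined with a non-Docker runtime such as containerd gets kindnet,
	// plus the kubelet.cni-conf-dir extra-config that was auto-set above.
	func chooseCNI(driver, runtime string) (cni string, extraConfig []string) {
		if (driver == "docker" || driver == "podman") && runtime != "docker" {
			return "kindnet", []string{"kubelet.cni-conf-dir=/etc/cni/net.mk"}
		}
		return "", nil
	}

	func main() {
		cni, extra := chooseCNI("docker", "containerd")
		fmt.Println(cni, extra) // kindnet [kubelet.cni-conf-dir=/etc/cni/net.mk]
	}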

TestDownloadOnly/v1.22.0-rc.0/json-events (12.23s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210814090438-6746 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210814090438-6746 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.234241501s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (12.23s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210814090438-6746
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210814090438-6746: exit status 85 (61.355801ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/14 09:05:00
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:05:00.305557    7041 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:05:00.305647    7041 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:05:00.305655    7041 out.go:311] Setting ErrFile to fd 2...
	I0814 09:05:00.305658    7041 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:05:00.305743    7041 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	W0814 09:05:00.305838    7041 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/config/config.json: no such file or directory
	I0814 09:05:00.305919    7041 out.go:305] Setting JSON to true
	I0814 09:05:00.339717    7041 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":2863,"bootTime":1628929038,"procs":138,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:05:00.339810    7041 start.go:121] virtualization: kvm guest
	I0814 09:05:00.342389    7041 notify.go:169] Checking for updates...
	I0814 09:05:00.344570    7041 config.go:177] Loaded profile config "download-only-20210814090438-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	W0814 09:05:00.344609    7041 start.go:659] api.Load failed for download-only-20210814090438-6746: filestore "download-only-20210814090438-6746": Docker machine "download-only-20210814090438-6746" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0814 09:05:00.344643    7041 driver.go:335] Setting default libvirt URI to qemu:///system
	W0814 09:05:00.344675    7041 start.go:659] api.Load failed for download-only-20210814090438-6746: filestore "download-only-20210814090438-6746": Docker machine "download-only-20210814090438-6746" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0814 09:05:00.387669    7041 docker.go:132] docker version: linux-19.03.15
	I0814 09:05:00.387771    7041 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:05:00.462379    7041 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-14 09:05:00.419149471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:05:00.462472    7041 docker.go:244] overlay module found
	I0814 09:05:00.464345    7041 start.go:278] selected driver: docker
	I0814 09:05:00.464355    7041 start.go:751] validating driver "docker" against &{Name:download-only-20210814090438-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210814090438-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:05:00.464817    7041 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:05:00.537334    7041 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-14 09:05:00.496217741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:05:00.537841    7041 cni.go:93] Creating CNI manager for ""
	I0814 09:05:00.537858    7041 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0814 09:05:00.537868    7041 start_flags.go:277] config:
	{Name:download-only-20210814090438-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210814090438-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:05:00.539732    7041 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0814 09:05:00.541154    7041 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:05:00.541252    7041 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0814 09:05:00.601524    7041 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:05:00.601542    7041 cache.go:56] Caching tarball of preloaded images
	I0814 09:05:00.601770    7041 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0814 09:05:00.603674    7041 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0814 09:05:00.611850    7041 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0814 09:05:00.611868    7041 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0814 09:05:00.630132    7041 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:569167d620e883cc7aa194927ed83d26 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0814 09:05:10.109688    7041 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0814 09:05:10.109795    7041 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210814090438-6746"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.35s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210814090438-6746
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (7.84s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210814090513-6746 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210814090513-6746 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (5.734402922s)
helpers_test.go:176: Cleaning up "download-docker-20210814090513-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210814090513-6746
--- PASS: TestDownloadOnlyKic (7.84s)
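
The image.go lines in the runs above check for the kicbase image in the local docker daemon and skip the pull when it is already present. The same check can be sketched against the docker CLI, whose `docker image inspect` exits non-zero for a missing image (illustrative helper, not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInDaemon reports whether the local docker daemon already holds
	// the given image reference; `docker image inspect` fails if it is absent.
	func imageInDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032"
		if imageInDaemon(ref) {
			fmt.Println("found in local docker daemon, skipping pull")
		} else {
			fmt.Println("not found, would pull", ref)
		}
	}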

TestOffline (99.2s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20210814093232-6746 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20210814093232-6746 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m36.282348952s)
helpers_test.go:176: Cleaning up "offline-containerd-20210814093232-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20210814093232-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20210814093232-6746: (2.915353816s)
--- PASS: TestOffline (99.20s)

TestAddons/parallel/Registry (17.11s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: registry stabilized in 15.931909ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-q9s6s" [95002620-3742-4652-b34b-d96b5e6ff868] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015815896s

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-v8z5r" [a2e997be-3346-4f74-b057-01f3290886ed] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005822848s
addons_test.go:307: (dbg) Run:  kubectl --context addons-20210814090521-6746 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:312: (dbg) Run:  kubectl --context addons-20210814090521-6746 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:312: (dbg) Done: kubectl --context addons-20210814090521-6746 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.540172268s)
addons_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 ip
2021/08/14 09:07:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.11s)
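
The "waiting 6m0s for pods matching ..." lines above come from a helper that polls the cluster until every pod behind a label selector is Running. A rough client-go sketch of such a wait loop (hypothetical waitForPods helper; the real helpers_test.go also accepts Succeeded pods and reports richer state):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until at least one pod matches the selector and
	// all matching pods report phase Running, or the timeout expires.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitForPods(cs, "kube-system", "actual-registry=true", 6*time.Minute))
	}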

TestAddons/parallel/Ingress (38.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-8zn4c" [aa97ba97-dc40-46fa-9b3a-fd9afd8ebd40] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 4.52505ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210814090521-6746 replace --force -f testdata/nginx-ingv1beta.yaml
addons_test.go:170: kubectl --context addons-20210814090521-6746 replace --force -f testdata/nginx-ingv1beta.yaml: unexpected stderr: Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210814090521-6746 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [fb123d72-41f7-4abe-952f-b765f83700c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [fb123d72-41f7-4abe-952f-b765f83700c4] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00553201s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:230: (dbg) Run:  kubectl --context addons-20210814090521-6746 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable ingress --alsologtostderr -v=1: (28.933505681s)
--- PASS: TestAddons/parallel/Ingress (38.33s)
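
The ingress assertions above curl 127.0.0.1 with a spoofed Host header so that nginx routes the request to the test backend. For reference, the net/http equivalent of `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` (a sketch; the test itself shells out to curl over `minikube ssh`):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// In net/http the Host header is a dedicated field on the request,
		// not an entry in req.Header.
		req.Host = "nginx.example.com"

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}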

TestAddons/parallel/MetricsServer (5.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: metrics-server stabilized in 14.589651ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-7vp7l" [b52a9045-3344-4d41-bbad-aa5ea22bcb35] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01729045s
addons_test.go:382: (dbg) Run:  kubectl --context addons-20210814090521-6746 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.61s)

TestAddons/parallel/HelmTiller (8.38s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: tiller-deploy stabilized in 1.619259ms
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-dlf7v" [8c3b7bfd-7742-4c2e-8357-285e335bf6cf] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006967231s
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210814090521-6746 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Done: kubectl --context addons-20210814090521-6746 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (3.002526704s)
addons_test.go:457: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.38s)

TestAddons/parallel/Olm (51.42s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: catalog-operator stabilized in 14.083474ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:480: olm-operator stabilized in 17.146145ms
addons_test.go:484: packageserver stabilized in 19.615486ms
addons_test.go:486: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "catalog-operator-75d496484d-6j2b4" [e22c9350-ab6f-4f9b-b209-6c5fb82d3682] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:486: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.010380509s

=== CONT  TestAddons/parallel/Olm
addons_test.go:489: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "olm-operator-859c88c96-2lzs9" [3b1b0ff0-92fd-44b0-94ad-a26f0d98d689] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:489: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.005676777s

=== CONT  TestAddons/parallel/Olm
addons_test.go:492: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-7f84788d7d-7nmcb" [eb5304ab-33c9-4540-a6ad-fb573b096a68] Running
helpers_test.go:343: "packageserver-7f84788d7d-f62c9" [d40e6a7f-8f1d-4318-856d-15870de9379f] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-7f84788d7d-7nmcb" [eb5304ab-33c9-4540-a6ad-fb573b096a68] Running
helpers_test.go:343: "packageserver-7f84788d7d-f62c9" [d40e6a7f-8f1d-4318-856d-15870de9379f] Running
helpers_test.go:343: "packageserver-7f84788d7d-7nmcb" [eb5304ab-33c9-4540-a6ad-fb573b096a68] Running
helpers_test.go:343: "packageserver-7f84788d7d-f62c9" [d40e6a7f-8f1d-4318-856d-15870de9379f] Running
helpers_test.go:343: "packageserver-7f84788d7d-7nmcb" [eb5304ab-33c9-4540-a6ad-fb573b096a68] Running
helpers_test.go:343: "packageserver-7f84788d7d-f62c9" [d40e6a7f-8f1d-4318-856d-15870de9379f] Running
helpers_test.go:343: "packageserver-7f84788d7d-7nmcb" [eb5304ab-33c9-4540-a6ad-fb573b096a68] Running
helpers_test.go:343: "packageserver-7f84788d7d-f62c9" [d40e6a7f-8f1d-4318-856d-15870de9379f] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-7f84788d7d-7nmcb" [eb5304ab-33c9-4540-a6ad-fb573b096a68] Running
addons_test.go:492: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.006911258s
addons_test.go:495: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-b2s2r" [efbd6650-b02d-49c1-aceb-9a15b4591aa5] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:495: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.005646321s
addons_test.go:500: (dbg) Run:  kubectl --context addons-20210814090521-6746 create -f testdata/etcd.yaml
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210814090521-6746 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210814090521-6746 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210814090521-6746 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210814090521-6746 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210814090521-6746 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210814090521-6746 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210814090521-6746 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210814090521-6746 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (51.42s)
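
The repeated "kubectl get csv -n my-etcd" runs above are a poll: after testdata/etcd.yaml is applied, OLM needs time to copy the operator's ClusterServiceVersion into the target namespace, so the early "No resources found" responses are expected. A minimal sketch of that retry loop, assuming a match on "etcd" in the CSV name (the match string is not taken from the test source):

    # Poll until the CSV registered by testdata/etcd.yaml appears in the namespace.
    until kubectl --context addons-20210814090521-6746 get csv -n my-etcd 2>/dev/null | grep -q etcd; do
      sleep 5   # OLM has not propagated the ClusterServiceVersion yet
    done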

TestAddons/parallel/CSI (53.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 4.575989ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-20210814090521-6746 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210814090521-6746 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-20210814090521-6746 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [3ebab47f-4a19-47c1-a0dc-21a384fa3399] Pending
helpers_test.go:343: "task-pv-pod" [3ebab47f-4a19-47c1-a0dc-21a384fa3399] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [3ebab47f-4a19-47c1-a0dc-21a384fa3399] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.007976354s
addons_test.go:562: (dbg) Run:  kubectl --context addons-20210814090521-6746 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210814090521-6746 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210814090521-6746 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-20210814090521-6746 delete pod task-pv-pod
addons_test.go:572: (dbg) Done: kubectl --context addons-20210814090521-6746 delete pod task-pv-pod: (2.518469594s)
addons_test.go:578: (dbg) Run:  kubectl --context addons-20210814090521-6746 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-20210814090521-6746 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210814090521-6746 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210814090521-6746 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-20210814090521-6746 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [8d2e7b2e-192b-49db-abb9-41b92f1d73b2] Pending
helpers_test.go:343: "task-pv-pod-restore" [8d2e7b2e-192b-49db-abb9-41b92f1d73b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [8d2e7b2e-192b-49db-abb9-41b92f1d73b2] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.008224041s
addons_test.go:604: (dbg) Run:  kubectl --context addons-20210814090521-6746 delete pod task-pv-pod-restore
addons_test.go:604: (dbg) Done: kubectl --context addons-20210814090521-6746 delete pod task-pv-pod-restore: (10.047548853s)
addons_test.go:608: (dbg) Run:  kubectl --context addons-20210814090521-6746 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-20210814090521-6746 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.956920292s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.31s)
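
The sequence above is the standard CSI snapshot/restore round-trip: snapshot a bound PVC, then create a new PVC whose dataSource references the snapshot. A sketch of what the two referenced testdata manifests plausibly contain (the test applies them at separate steps, as logged above); class names, the storage size, and the snapshot apiVersion are assumptions, not copied from the repo:

    kubectl --context addons-20210814090521-6746 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1   # may be v1beta1 on older snapshot CRDs
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc                 # assumed class name
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                                  # assumed size
    EOF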

TestAddons/parallel/GCPAuth (43.94s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:631: (dbg) Run:  kubectl --context addons-20210814090521-6746 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:637: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ca9dce2d-bc83-4f42-884a-a52a21cba285] Pending
helpers_test.go:343: "busybox" [ca9dce2d-bc83-4f42-884a-a52a21cba285] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [ca9dce2d-bc83-4f42-884a-a52a21cba285] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:637: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.006372342s
addons_test.go:643: (dbg) Run:  kubectl --context addons-20210814090521-6746 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:680: (dbg) Run:  kubectl --context addons-20210814090521-6746 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210814090521-6746 apply -f testdata/private-image.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-ttdmx" [fae2bb28-8f83-42b3-84ee-0dad91992150] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-ttdmx" [fae2bb28-8f83-42b3-84ee-0dad91992150] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 11.005549217s
addons_test.go:709: (dbg) Run:  kubectl --context addons-20210814090521-6746 apply -f testdata/private-image-eu.yaml
addons_test.go:716: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-7zb5z" [91df8aeb-b72d-4589-942f-0001392ddf2f] Pending
helpers_test.go:343: "private-image-eu-5956d58f9f-7zb5z" [91df8aeb-b72d-4589-942f-0001392ddf2f] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-7zb5z" [91df8aeb-b72d-4589-942f-0001392ddf2f] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:716: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 10.006183294s
addons_test.go:722: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:722: (dbg) Done: out/minikube-linux-amd64 -p addons-20210814090521-6746 addons disable gcp-auth --alsologtostderr -v=1: (12.789226674s)
--- PASS: TestAddons/parallel/GCPAuth (43.94s)
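
The two printenv probes above are the heart of the test: the gcp-auth addon's mutating webhook injects Google credentials into pods created after the addon is enabled, so both variables must resolve inside the busybox pod. A combined spot-check against the same pod and context (only recombining the commands logged above):

    kubectl --context addons-20210814090521-6746 exec busybox -- /bin/sh -c \
      'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'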

TestCertOptions (47.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210814093815-6746 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210814093815-6746 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (43.842278552s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210814093815-6746 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210814093815-6746 config view
helpers_test.go:176: Cleaning up "cert-options-20210814093815-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210814093815-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210814093815-6746: (2.943953413s)
--- PASS: TestCertOptions (47.10s)
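
The two verification steps above reduce to: the generated apiserver certificate must list the extra --apiserver-ips/--apiserver-names values as SANs, and the kubeconfig entry must use port 8555. A narrower probe than the full dumps the test parses (the grep patterns are assumptions):

    out/minikube-linux-amd64 -p cert-options-20210814093815-6746 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'   # expect 192.168.15.15 and www.google.com
    kubectl --context cert-options-20210814093815-6746 config view --minify \
      | grep server:                          # expect the :8555 endpoint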

TestForceSystemdFlag (52.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210814093636-6746 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210814093636-6746 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (48.960399508s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210814093636-6746 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210814093636-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210814093636-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210814093636-6746: (2.954287534s)
--- PASS: TestForceSystemdFlag (52.18s)
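
Dumping /etc/containerd/config.toml is how the test confirms --force-systemd reached the container runtime; for containerd the relevant setting is the runc runtime's SystemdCgroup option. A narrower check, assuming that field is what the test greps for:

    out/minikube-linux-amd64 -p force-systemd-flag-20210814093636-6746 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"   # expect: SystemdCgroup = true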

TestForceSystemdEnv (47.06s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210814093728-6746 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210814093728-6746 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.824994817s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210814093728-6746 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20210814093728-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210814093728-6746

=== CONT  TestForceSystemdEnv
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210814093728-6746: (2.971011091s)
--- PASS: TestForceSystemdEnv (47.06s)

TestKVMDriverInstallOrUpdate (3.62s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.62s)

TestErrorSpam/setup (43.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210814090915-6746 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210814090915-6746 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210814090915-6746 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210814090915-6746 --driver=docker  --container-runtime=containerd: (43.494866831s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (43.50s)

TestErrorSpam/start (0.91s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 start --dry-run
--- PASS: TestErrorSpam/start (0.91s)

TestErrorSpam/status (0.9s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 status
--- PASS: TestErrorSpam/status (0.90s)

TestErrorSpam/pause (3.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 pause: exit status 80 (2.129219552s)

-- stdout --
	* Pausing node nospam-20210814090915-6746 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 327f6a5f8f681e8981dc68b5f7dbd4f1e65e1b6060a1f2f77a8ade07a89d9812 50c3fedcee5b1493d6247f0c383dfe25410445f0afe8d257efd055d9d556b187: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-14T09:10:02Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:158: "out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 pause
--- PASS: TestErrorSpam/pause (3.38s)
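
The first pause attempt above fails on argument arity, not on cluster state: minikube handed two container IDs to a single runc invocation, while runc pause accepts exactly one (its usage text is quoted in the stderr block). The working shape is one invocation per container, shown here with the IDs from that stderr:

    for id in 327f6a5f8f681e8981dc68b5f7dbd4f1e65e1b6060a1f2f77a8ade07a89d9812 \
              50c3fedcee5b1493d6247f0c383dfe25410445f0afe8d257efd055d9d556b187; do
      sudo runc --root /run/containerd/runc/k8s.io pause "$id"   # one container per call
    done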

TestErrorSpam/unpause (1.21s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 unpause
--- PASS: TestErrorSpam/unpause (1.21s)

TestErrorSpam/stop (23.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 stop: (23.186514466s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210814090915-6746 --log_dir /tmp/nospam-20210814090915-6746 stop
--- PASS: TestErrorSpam/stop (23.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/files/etc/test/nested/copy/6746/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210814091034-6746 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210814091034-6746 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m13.684740534s)
--- PASS: TestFunctional/serial/StartWithProxy (73.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.38s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210814091034-6746 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210814091034-6746 --alsologtostderr -v=8: (15.383467727s)
functional_test.go:631: soft start took 15.384088826s for "functional-20210814091034-6746" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.38s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.21s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210814091034-6746 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210814091034-6746 /tmp/functional-20210814091034-6746989661056
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 cache add minikube-local-cache-test:functional-20210814091034-6746
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210814091034-6746 cache add minikube-local-cache-test:functional-20210814091034-6746: (1.093160572s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 cache delete minikube-local-cache-test:functional-20210814091034-6746
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210814091034-6746
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (272.766688ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 cache reload
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
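
The exit-status-1 block above is the expected midpoint of the round-trip this test drives: delete the cached image inside the node, confirm it is gone, then let cache reload push it back from the host-side cache. Linearized from the commands logged above:

    P=functional-20210814091034-6746
    out/minikube-linux-amd64 -p "$P" ssh sudo crictl rmi k8s.gcr.io/pause:latest
    out/minikube-linux-amd64 -p "$P" ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # fails: image removed
    out/minikube-linux-amd64 -p "$P" cache reload
    out/minikube-linux-amd64 -p "$P" ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # succeeds again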

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 kubectl -- --context functional-20210814091034-6746 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210814091034-6746 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210814091034-6746 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0814 09:12:38.025619    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:38.031376    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:38.041596    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:38.062085    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:38.102317    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:38.182539    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:38.343034    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:38.663583    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:39.304539    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:40.584994    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:12:43.145786    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210814091034-6746 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.041235609s)
functional_test.go:719: restart took 38.041349732s for "functional-20210814091034-6746" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.04s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210814091034-6746 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 logs
E0814 09:12:48.266924    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210814091034-6746 logs: (1.099377783s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/serial/LogsFileCmd (0.97s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 logs --file /tmp/functional-20210814091034-6746891131615/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.97s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 config get cpus: exit status 14 (51.295375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 config get cpus: exit status 14 (67.413041ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
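
The interleaved chunks above linearize to a simple contract for minikube config: get on an unset key exits 14 with "specified key could not be found in config", set makes it readable, and unset restores the exit-14 state. In order:

    P=functional-20210814091034-6746
    out/minikube-linux-amd64 -p "$P" config unset cpus   # no-op if already unset
    out/minikube-linux-amd64 -p "$P" config get cpus     # exit 14: key not set
    out/minikube-linux-amd64 -p "$P" config set cpus 2
    out/minikube-linux-amd64 -p "$P" config get cpus     # prints 2
    out/minikube-linux-amd64 -p "$P" config unset cpus
    out/minikube-linux-amd64 -p "$P" config get cpus     # exit 14 again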

TestFunctional/parallel/DashboardCmd (2.79s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210814091034-6746 --alsologtostderr -v=1]
2021/08/14 09:13:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210814091034-6746 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 40058: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (2.79s)

TestFunctional/parallel/DryRun (0.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210814091034-6746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210814091034-6746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (235.967819ms)

-- stdout --
	* [functional-20210814091034-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0814 09:13:08.914415   39701 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:13:08.914496   39701 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:13:08.914500   39701 out.go:311] Setting ErrFile to fd 2...
	I0814 09:13:08.914502   39701 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:13:08.914596   39701 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:13:08.914793   39701 out.go:305] Setting JSON to false
	I0814 09:13:08.949887   39701 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":3351,"bootTime":1628929038,"procs":234,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:13:08.949979   39701 start.go:121] virtualization: kvm guest
	I0814 09:13:08.952419   39701 out.go:177] * [functional-20210814091034-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:13:08.953815   39701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:13:08.955029   39701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:13:08.956315   39701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:13:08.957626   39701 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:13:08.958068   39701 config.go:177] Loaded profile config "functional-20210814091034-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:13:08.958457   39701 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:13:09.002293   39701 docker.go:132] docker version: linux-19.03.15
	I0814 09:13:09.002362   39701 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:13:09.087547   39701 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-14 09:13:09.03870832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:13:09.087625   39701 docker.go:244] overlay module found
	I0814 09:13:09.089692   39701 out.go:177] * Using the docker driver based on existing profile
	I0814 09:13:09.089721   39701 start.go:278] selected driver: docker
	I0814 09:13:09.089728   39701 start.go:751] validating driver "docker" against &{Name:functional-20210814091034-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210814091034-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:13:09.089873   39701 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:13:09.089909   39701 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:13:09.089926   39701 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0814 09:13:09.091351   39701 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:13:09.093337   39701 out.go:177] 
	W0814 09:13:09.093479   39701 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0814 09:13:09.094861   39701 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210814091034-6746 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.57s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210814091034-6746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210814091034-6746 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (221.381143ms)

-- stdout --
	* [functional-20210814091034-6746] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0814 09:13:08.688144   39630 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:13:08.688243   39630 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:13:08.688251   39630 out.go:311] Setting ErrFile to fd 2...
	I0814 09:13:08.688255   39630 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:13:08.688373   39630 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:13:08.688581   39630 out.go:305] Setting JSON to false
	I0814 09:13:08.725945   39630 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":3351,"bootTime":1628929038,"procs":234,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:13:08.726033   39630 start.go:121] virtualization: kvm guest
	I0814 09:13:08.728240   39630 out.go:177] * [functional-20210814091034-6746] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0814 09:13:08.729641   39630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:13:08.730867   39630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:13:08.732136   39630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:13:08.733348   39630 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:13:08.733745   39630 config.go:177] Loaded profile config "functional-20210814091034-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:13:08.734090   39630 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:13:08.778236   39630 docker.go:132] docker version: linux-19.03.15
	I0814 09:13:08.778341   39630 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:13:08.852419   39630 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-14 09:13:08.811560981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:13:08.852496   39630 docker.go:244] overlay module found
	I0814 09:13:08.854644   39630 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0814 09:13:08.854668   39630 start.go:278] selected driver: docker
	I0814 09:13:08.854674   39630 start.go:751] validating driver "docker" against &{Name:functional-20210814091034-6746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210814091034-6746 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0814 09:13:08.854784   39630 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:13:08.854814   39630 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:13:08.854829   39630 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0814 09:13:08.856174   39630 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:13:08.858070   39630 out.go:177] 
	W0814 09:13:08.858169   39630 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0814 09:13:08.859391   39630 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
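
Note: the French output above is expected; this test verifies that minikube localizes its user-facing messages while still failing fast on an undersized --memory request. A rough manual reproduction (a sketch; assuming the locale is picked up from LC_ALL, which is how the test harness requests French):

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-20210814091034-6746 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	echo $?    # expected: 23, the RSRC_INSUFFICIENT_REQ_MEMORY exit code seen above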

TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
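
For reference, the three status invocations above can be run by hand as follows (the "kublet" key is copied verbatim from the test's format string):

	out/minikube-linux-amd64 -p functional-20210814091034-6746 status
	out/minikube-linux-amd64 -p functional-20210814091034-6746 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-20210814091034-6746 status -o json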

TestFunctional/parallel/ServiceCmd (28.52s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210814091034-6746 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210814091034-6746 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-kmqjf" [5aa1d2b7-8a7d-4ce9-bd54-45208978e830] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-kmqjf" [5aa1d2b7-8a7d-4ce9-bd54-45208978e830] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 26.005625427s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Done: out/minikube-linux-amd64 -p functional-20210814091034-6746 service list: (1.334731819s)
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1394: found endpoint: https://192.168.49.2:32663
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:32663
functional_test.go:1431: Attempting to fetch http://192.168.49.2:32663 ...
functional_test.go:1450: http://192.168.49.2:32663: success! body:

Hostname: hello-node-6cbfcd7cbc-kmqjf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32663
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (28.52s)
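
The sequence above is the standard NodePort round trip. A minimal manual sketch of the same flow (deployment name and image as in the test):

	kubectl --context functional-20210814091034-6746 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-20210814091034-6746 expose deployment hello-node --type=NodePort --port=8080
	curl "$(out/minikube-linux-amd64 -p functional-20210814091034-6746 service hello-node --url)"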

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (49.1s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [1f029533-f7a6-40c5-9ded-5b33376cb56c] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006541559s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210814091034-6746 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210814091034-6746 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210814091034-6746 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210814091034-6746 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [f495940a-2c97-47f3-a53a-be9b33d3142b] Pending
helpers_test.go:343: "sp-pod" [f495940a-2c97-47f3-a53a-be9b33d3142b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [f495940a-2c97-47f3-a53a-be9b33d3142b] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.006266832s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210814091034-6746 exec sp-pod -- touch /tmp/mount/foo
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210814091034-6746 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210814091034-6746 delete -f testdata/storage-provisioner/pod.yaml: (10.071615166s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210814091034-6746 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [b5222927-bbfc-45e3-ae58-6b682418ceec] Pending
helpers_test.go:343: "sp-pod" [b5222927-bbfc-45e3-ae58-6b682418ceec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [b5222927-bbfc-45e3-ae58-6b682418ceec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.005720713s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210814091034-6746 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.10s)
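
The test applies testdata/storage-provisioner/pvc.yaml, which is not reproduced in this log. A minimal claim along those lines (the name myclaim is confirmed by the `get pvc myclaim` call above; the access mode and size here are illustrative):

	kubectl --context functional-20210814091034-6746 apply -f - <<-'EOF'
		apiVersion: v1
		kind: PersistentVolumeClaim
		metadata:
		  name: myclaim
		spec:
		  accessModes: ["ReadWriteOnce"]
		  resources:
		    requests:
		      storage: 500Mi
	EOF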

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (0.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.56s)

TestFunctional/parallel/MySQL (17.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210814091034-6746 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-d4lv5" [287207f1-6f94-4667-aa5e-5877b6306513] Pending
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-d4lv5" [287207f1-6f94-4667-aa5e-5877b6306513] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-d4lv5" [287207f1-6f94-4667-aa5e-5877b6306513] Running
E0814 09:12:58.507319    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.014640895s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210814091034-6746 exec mysql-9bbbc5bbb-d4lv5 -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210814091034-6746 exec mysql-9bbbc5bbb-d4lv5 -- mysql -ppassword -e "show databases;": exit status 1 (207.714351ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210814091034-6746 exec mysql-9bbbc5bbb-d4lv5 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210814091034-6746 exec mysql-9bbbc5bbb-d4lv5 -- mysql -ppassword -e "show databases;": exit status 1 (139.545234ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210814091034-6746 exec mysql-9bbbc5bbb-d4lv5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.85s)
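
The two ERROR 2002 failures above are expected: the pod reports Running before mysqld has finished initializing its socket, so the test retries until the query succeeds. The equivalent manual loop (pod name taken from this run):

	until kubectl --context functional-20210814091034-6746 exec mysql-9bbbc5bbb-d4lv5 -- mysql -ppassword -e 'show databases;'; do
		sleep 2
	done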

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/6746/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /etc/test/nested/copy/6746/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
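
File sync works by mirroring everything under $MINIKUBE_HOME/files into the node at the same path at start time; that is how /etc/test/nested/copy/6746/hosts ended up in the VM. Roughly (paths from this run, content from the log line above):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/6746"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/6746/hosts"
	# the file is copied into the node on the next `minikube start`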

TestFunctional/parallel/CertSync (1.68s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/6746.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /etc/ssl/certs/6746.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/6746.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /usr/share/ca-certificates/6746.pem"
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/67462.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /etc/ssl/certs/67462.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/67462.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /usr/share/ca-certificates/67462.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
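
Cert sync installs certificates placed under $MINIKUBE_HOME/certs into the node at /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL subject-hash alias (the 51391683.0 and 3ec20f2e.0 entries above). The alias for a given cert can be derived locally; a sketch, assuming the test cert is still at this path:

	openssl x509 -hash -noout -in "$MINIKUBE_HOME/certs/6746.pem"    # should print 51391683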

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210814091034-6746 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/LoadImage (1.79s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210814091034-6746
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 image load docker.io/library/busybox:load-functional-20210814091034-6746
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210814091034-6746 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210814091034-6746
--- PASS: TestFunctional/parallel/LoadImage (1.79s)
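
The load path above, reproduced by hand (the tag is arbitrary; these commands mirror the test):

	docker pull busybox:1.33
	docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210814091034-6746
	out/minikube-linux-amd64 -p functional-20210814091034-6746 image load docker.io/library/busybox:load-functional-20210814091034-6746
	out/minikube-linux-amd64 ssh -p functional-20210814091034-6746 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210814091034-6746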

TestFunctional/parallel/RemoveImage (2.23s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210814091034-6746
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 image load docker.io/library/busybox:remove-functional-20210814091034-6746
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 image rm docker.io/library/busybox:remove-functional-20210814091034-6746
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210814091034-6746 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (2.23s)

TestFunctional/parallel/LoadImageFromFile (1.49s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210814091034-6746
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210814091034-6746
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/busybox.tar
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210814091034-6746 -- sudo crictl images
E0814 09:13:18.988115    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/LoadImageFromFile (1.49s)
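
Same idea, but via a tarball instead of the local Docker daemon:

	docker pull busybox:1.31
	docker save -o busybox.tar busybox:1.31
	out/minikube-linux-amd64 -p functional-20210814091034-6746 image load ./busybox.tar
	out/minikube-linux-amd64 ssh -p functional-20210814091034-6746 -- sudo crictl images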

TestFunctional/parallel/BuildImage (3.37s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 image build -t localhost/my-image:functional-20210814091034-6746 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210814091034-6746 image build -t localhost/my-image:functional-20210814091034-6746 testdata/build: (3.086187863s)
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210814091034-6746 image build -t localhost/my-image:functional-20210814091034-6746 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 77B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 0.8s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.0s

#4 [1/3] FROM docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
#4 resolve docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60 0.0s done
#4 DONE 0.0s

#4 [1/3] FROM docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
#4 extracting sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2
#4 extracting sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2 0.1s done
#4 DONE 0.2s

#5 [2/3] RUN true
#5 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:13f22199768f0f8ebb86cc13607e24d0452f1e070289bba4354d501445e8b022 done
#8 exporting config sha256:da1bce63228e4692a06b802b674caf765846d61e2411bc5112243f1a3afc55ce done
#8 naming to localhost/my-image:functional-20210814091034-6746 done
#8 DONE 0.1s
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210814091034-6746 -- sudo crictl inspecti localhost/my-image:functional-20210814091034-6746
--- PASS: TestFunctional/parallel/BuildImage (3.37s)
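
From the BuildKit steps above, the 77-byte Dockerfile in testdata/build is evidently three instructions; a reconstruction (inferred from the build log, not quoted from the repo) plus the build command:

	mkdir build && cd build && echo test > content.txt
	printf 'FROM busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-linux-amd64 -p functional-20210814091034-6746 image build -t localhost/my-image:functional-20210814091034-6746 .

Because the build runs inside the node and the result is tagged directly in containerd, no registry push is needed before the crictl inspecti check.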

TestFunctional/parallel/ListImages (0.3s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210814091034-6746 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20210814091034-6746
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.30s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo systemctl is-active docker"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo systemctl is-active docker": exit status 1 (293.275511ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo systemctl is-active crio"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo systemctl is-active crio": exit status 1 (262.619983ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
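
`systemctl is-active` prints the unit state and exits non-zero unless the unit is active, so "inactive" with ssh exit status 3 is the expected result here: with --container-runtime=containerd, neither docker nor crio should be running in the node. By hand:

	out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo systemctl is-active docker"   # prints "inactive", exits non-zero
	out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo systemctl is-active crio"     # likewise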

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210814091034-6746 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210814091034-6746 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.100.236.167 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210814091034-6746 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
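
The tunnel serial group follows the usual pattern: keep `minikube tunnel` running in the background so LoadBalancer services get a reachable ingress IP, read the IP back, hit it, then stop the tunnel. A sketch (IP from the AccessDirect step above):

	out/minikube-linux-amd64 -p functional-20210814091034-6746 tunnel &
	kubectl --context functional-20210814091034-6746 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.100.236.167/
	kill %1    # the DeleteTunnel equivalent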

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1245: Took "294.158789ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "63.887264ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1295: Took "296.258355ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "49.97452ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/any-port (9.69s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210814091034-6746 /tmp/mounttest960015538:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628932386260555963" to /tmp/mounttest960015538/created-by-test
functional_test_mount_test.go:110: wrote "test-1628932386260555963" to /tmp/mounttest960015538/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628932386260555963" to /tmp/mounttest960015538/test-1628932386260555963
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.879713ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 14 09:13 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 14 09:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 14 09:13 test-1628932386260555963
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh cat /mount-9p/test-1628932386260555963
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210814091034-6746 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [37f697f9-a2e4-41fd-bed8-bac13349b04d] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [37f697f9-a2e4-41fd-bed8-bac13349b04d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [37f697f9-a2e4-41fd-bed8-bac13349b04d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.006042142s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210814091034-6746 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210814091034-6746 /tmp/mounttest960015538:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.69s)
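
The initial findmnt failure above is just a poll racing the mount daemon; the 9p mount becomes visible on the retry. The manual equivalent (host path illustrative):

	out/minikube-linux-amd64 mount -p functional-20210814091034-6746 /tmp/mountdemo:/mount-9p &
	out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo umount -f /mount-9p"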

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.8s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

TestFunctional/parallel/MountCmd/specific-port (1.65s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210814091034-6746 /tmp/mounttest064546409:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.448522ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210814091034-6746 /tmp/mounttest064546409:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh "sudo umount -f /mount-9p": exit status 1 (281.603929ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210814091034-6746 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210814091034-6746 /tmp/mounttest064546409:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210814091034-6746 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/delete_busybox_image (0.08s)
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210814091034-6746
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210814091034-6746
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210814091034-6746
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210814091034-6746
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210814091510-6746 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210814091510-6746 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.517535ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210814091510-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"4fbf5c70-edcb-44b7-b7e0-ba436f5ad87b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig"},"datacontenttype":"application/json","id":"86802ac3-4b9d-4012-a748-6a2c9140dc56","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"8005613a-f61c-4c87-a0dc-265919243344","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube"},"datacontenttype":"application/json","id":"0f0f5c8c-82c0-4a7a-a84d-49ee84058da9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=master"},"datacontenttype":"application/json","id":"b75a43eb-9377-48f6-8ce9-e799cc55bca6","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"2e83a24b-447d-4311-a220-c54d724b10bb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210814091510-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210814091510-6746
--- PASS: TestErrorJSONOutput (0.32s)
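
The `--output=json` lines above are CloudEvents-style envelopes (specversion 1.0) carrying minikube-specific event types such as io.k8s.sigs.minikube.step, .info, and .error. A hedged Go sketch of decoding one of them; the struct is assembled from the field names visible in the log, not taken from minikube's own source:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// minikubeEvent mirrors the envelope fields seen in the log output.
	type minikubeEvent struct {
		Data            map[string]string `json:"data"`
		DataContentType string            `json:"datacontenttype"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		SpecVersion     string            `json:"specversion"`
		Type            string            `json:"type"`
	}

	func main() {
		// Trimmed copy of the error event from the log (id shortened).
		line := `{"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"},"datacontenttype":"application/json","id":"example","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["message"])
	}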

TestKicCustomNetwork/create_custom_network (34.53s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210814091510-6746 --network=
E0814 09:15:21.870645    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210814091510-6746 --network=: (30.183646888s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210814091510-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210814091510-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210814091510-6746: (4.305063762s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.53s)

TestKicCustomNetwork/use_default_bridge_network (24.38s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210814091544-6746 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210814091544-6746 --network=bridge: (22.077814299s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210814091544-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210814091544-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210814091544-6746: (2.266388855s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.38s)

TestKicExistingNetwork (25.03s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210814091609-6746 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210814091609-6746 --network=existing-network: (22.358379021s)
helpers_test.go:176: Cleaning up "existing-network-20210814091609-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210814091609-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210814091609-6746: (2.433846194s)
--- PASS: TestKicExistingNetwork (25.03s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (111.86s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210814091634-6746 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0814 09:17:38.025100    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:17:50.189004    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:50.194257    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:50.204483    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:50.225423    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:50.265857    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:50.346938    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:50.507608    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:50.828118    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:51.469041    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:52.750232    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:17:55.312031    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:18:00.432253    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:18:05.711090    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:18:10.672512    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210814091634-6746 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m51.35160629s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.86s)

TestMultiNode/serial/DeployApp2Nodes (4.86s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- rollout status deployment/busybox: (2.913402252s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-rqcd6 -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-xnd2r -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-rqcd6 -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-xnd2r -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-rqcd6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-xnd2r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.86s)

TestMultiNode/serial/PingHostFrom2Pods (0.83s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0814 09:18:31.152851    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-rqcd6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-rqcd6 -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-xnd2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210814091634-6746 -- exec busybox-84b6686758-xnd2r -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
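
The pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) relies on busybox nslookup printing the answer as "Address 1: <ip> <name>" on its fifth line; the third space-separated field is the host IP, which the follow-up command then pings. A small Go sketch of the same extraction against sample output (the sample text is illustrative, shaped like busybox nslookup output):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Shaped like busybox nslookup output; values are made up.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.49.1 host.minikube.internal\n"
		lines := strings.Split(sample, "\n")
		fields := strings.Fields(lines[4]) // awk 'NR==5'
		fmt.Println(fields[2])             // cut -d' ' -f3 -> 192.168.49.1
	}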

TestMultiNode/serial/AddNode (42.07s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210814091634-6746 -v 3 --alsologtostderr
E0814 09:19:12.113564    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210814091634-6746 -v 3 --alsologtostderr: (41.357877482s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.07s)

TestMultiNode/serial/ProfileList (0.29s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (2.28s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 cp testdata/cp-test.txt multinode-20210814091634-6746-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 ssh -n multinode-20210814091634-6746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 cp testdata/cp-test.txt multinode-20210814091634-6746-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 ssh -n multinode-20210814091634-6746-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.28s)

TestMultiNode/serial/StopNode (21.8s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210814091634-6746 node stop m03: (20.71830326s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210814091634-6746 status: exit status 7 (542.105002ms)

-- stdout --
	multinode-20210814091634-6746
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210814091634-6746-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210814091634-6746-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr: exit status 7 (540.649347ms)

-- stdout --
	multinode-20210814091634-6746
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210814091634-6746-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210814091634-6746-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0814 09:19:37.873524   71303 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:19:37.873601   71303 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:37.873608   71303 out.go:311] Setting ErrFile to fd 2...
	I0814 09:19:37.873611   71303 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:37.873724   71303 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:19:37.873896   71303 out.go:305] Setting JSON to false
	I0814 09:19:37.873913   71303 mustload.go:65] Loading cluster: multinode-20210814091634-6746
	I0814 09:19:37.875204   71303 config.go:177] Loaded profile config "multinode-20210814091634-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:19:37.875226   71303 status.go:253] checking status of multinode-20210814091634-6746 ...
	I0814 09:19:37.875740   71303 cli_runner.go:115] Run: docker container inspect multinode-20210814091634-6746 --format={{.State.Status}}
	I0814 09:19:37.912942   71303 status.go:328] multinode-20210814091634-6746 host status = "Running" (err=<nil>)
	I0814 09:19:37.912968   71303 host.go:66] Checking if "multinode-20210814091634-6746" exists ...
	I0814 09:19:37.913209   71303 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210814091634-6746
	I0814 09:19:37.950399   71303 host.go:66] Checking if "multinode-20210814091634-6746" exists ...
	I0814 09:19:37.950635   71303 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:19:37.950672   71303 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210814091634-6746
	I0814 09:19:37.989542   71303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/multinode-20210814091634-6746/id_rsa Username:docker}
	I0814 09:19:38.081004   71303 ssh_runner.go:149] Run: systemctl --version
	I0814 09:19:38.084345   71303 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:19:38.093300   71303 kubeconfig.go:93] found "multinode-20210814091634-6746" server: "https://192.168.49.2:8443"
	I0814 09:19:38.093323   71303 api_server.go:164] Checking apiserver status ...
	I0814 09:19:38.093349   71303 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:19:38.109651   71303 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	I0814 09:19:38.115960   71303 api_server.go:180] apiserver freezer: "6:freezer:/docker/e1db03f054d558b7019ac2130a5a4aecb74e2b10385c58373e0572a5cec8e2ae/kubepods/burstable/podcc2beb8f787eaa8d05c82d83fe5c04be/7614b71337e7f4215e1da71fa58d5fb76ae5d3cee4b18d2d6d576c7df566ace2"
	I0814 09:19:38.116011   71303 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/e1db03f054d558b7019ac2130a5a4aecb74e2b10385c58373e0572a5cec8e2ae/kubepods/burstable/podcc2beb8f787eaa8d05c82d83fe5c04be/7614b71337e7f4215e1da71fa58d5fb76ae5d3cee4b18d2d6d576c7df566ace2/freezer.state
	I0814 09:19:38.121567   71303 api_server.go:202] freezer state: "THAWED"
	I0814 09:19:38.121589   71303 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0814 09:19:38.126099   71303 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0814 09:19:38.126119   71303 status.go:419] multinode-20210814091634-6746 apiserver status = Running (err=<nil>)
	I0814 09:19:38.126129   71303 status.go:255] multinode-20210814091634-6746 status: &{Name:multinode-20210814091634-6746 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:19:38.126153   71303 status.go:253] checking status of multinode-20210814091634-6746-m02 ...
	I0814 09:19:38.126424   71303 cli_runner.go:115] Run: docker container inspect multinode-20210814091634-6746-m02 --format={{.State.Status}}
	I0814 09:19:38.163557   71303 status.go:328] multinode-20210814091634-6746-m02 host status = "Running" (err=<nil>)
	I0814 09:19:38.163575   71303 host.go:66] Checking if "multinode-20210814091634-6746-m02" exists ...
	I0814 09:19:38.163801   71303 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210814091634-6746-m02
	I0814 09:19:38.200452   71303 host.go:66] Checking if "multinode-20210814091634-6746-m02" exists ...
	I0814 09:19:38.200694   71303 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:19:38.200727   71303 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210814091634-6746-m02
	I0814 09:19:38.237205   71303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/machines/multinode-20210814091634-6746-m02/id_rsa Username:docker}
	I0814 09:19:38.320716   71303 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:19:38.328929   71303 status.go:255] multinode-20210814091634-6746-m02 status: &{Name:multinode-20210814091634-6746-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:19:38.328967   71303 status.go:253] checking status of multinode-20210814091634-6746-m03 ...
	I0814 09:19:38.329224   71303 cli_runner.go:115] Run: docker container inspect multinode-20210814091634-6746-m03 --format={{.State.Status}}
	I0814 09:19:38.366663   71303 status.go:328] multinode-20210814091634-6746-m03 host status = "Stopped" (err=<nil>)
	I0814 09:19:38.366681   71303 status.go:341] host is not running, skipping remaining checks
	I0814 09:19:38.366686   71303 status.go:255] multinode-20210814091634-6746-m03 status: &{Name:multinode-20210814091634-6746-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.80s)
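
The trace above shows how `minikube status` concludes the apiserver is Running: inspect the container state via the Docker CLI, ssh in, pgrep the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz over HTTPS. A minimal Go sketch of that last step only; the address is the one from the log, and the TLS shortcut is illustrative rather than how minikube actually authenticates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		// Skip certificate verification for brevity; a real client
		// would trust the cluster CA instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode)
	}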

TestMultiNode/serial/StartAfterStop (36.12s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210814091634-6746 node start m03 --alsologtostderr: (35.307892712s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.12s)

TestMultiNode/serial/RestartKeepsNodes (193.74s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210814091634-6746
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210814091634-6746
E0814 09:20:34.034079    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210814091634-6746: (1m1.451211859s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210814091634-6746 --wait=true -v=8 --alsologtostderr
E0814 09:22:38.025029    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:22:50.188598    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:23:17.874999    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210814091634-6746 --wait=true -v=8 --alsologtostderr: (2m12.18778327s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210814091634-6746
--- PASS: TestMultiNode/serial/RestartKeepsNodes (193.74s)

TestMultiNode/serial/DeleteNode (24.64s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210814091634-6746 node delete m03: (23.9839426s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (24.64s)
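
The kubectl invocation above uses a go-template to pull each node's Ready condition. The same template can be exercised locally with Go's text/template package; the stub node list below is made up, only the template string comes from the log:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		const tmpl = `{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		// Stub of `kubectl get nodes -o json` with one Ready node.
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "MemoryPressure", "status": "False"},
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True"
			panic(err)
		}
	}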

TestMultiNode/serial/StopMultiNode (41.39s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210814091634-6746 stop: (41.147291859s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210814091634-6746 status: exit status 7 (122.910245ms)

-- stdout --
	multinode-20210814091634-6746
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210814091634-6746-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr: exit status 7 (122.681102ms)

-- stdout --
	multinode-20210814091634-6746
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210814091634-6746-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0814 09:24:34.180430   83039 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:24:34.180507   83039 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:24:34.180517   83039 out.go:311] Setting ErrFile to fd 2...
	I0814 09:24:34.180522   83039 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:24:34.180645   83039 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:24:34.180870   83039 out.go:305] Setting JSON to false
	I0814 09:24:34.180890   83039 mustload.go:65] Loading cluster: multinode-20210814091634-6746
	I0814 09:24:34.181237   83039 config.go:177] Loaded profile config "multinode-20210814091634-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:24:34.181251   83039 status.go:253] checking status of multinode-20210814091634-6746 ...
	I0814 09:24:34.181624   83039 cli_runner.go:115] Run: docker container inspect multinode-20210814091634-6746 --format={{.State.Status}}
	I0814 09:24:34.217898   83039 status.go:328] multinode-20210814091634-6746 host status = "Stopped" (err=<nil>)
	I0814 09:24:34.217918   83039 status.go:341] host is not running, skipping remaining checks
	I0814 09:24:34.217924   83039 status.go:255] multinode-20210814091634-6746 status: &{Name:multinode-20210814091634-6746 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:24:34.217970   83039 status.go:253] checking status of multinode-20210814091634-6746-m02 ...
	I0814 09:24:34.218229   83039 cli_runner.go:115] Run: docker container inspect multinode-20210814091634-6746-m02 --format={{.State.Status}}
	I0814 09:24:34.253939   83039 status.go:328] multinode-20210814091634-6746-m02 host status = "Stopped" (err=<nil>)
	I0814 09:24:34.253968   83039 status.go:341] host is not running, skipping remaining checks
	I0814 09:24:34.253979   83039 status.go:255] multinode-20210814091634-6746-m02 status: &{Name:multinode-20210814091634-6746-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.39s)

TestMultiNode/serial/RestartMultiNode (91.68s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210814091634-6746 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210814091634-6746 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m31.016555462s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210814091634-6746 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (91.68s)

TestMultiNode/serial/ValidateNameConflict (45.86s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210814091634-6746
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210814091634-6746-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210814091634-6746-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.935463ms)

-- stdout --
	* [multinode-20210814091634-6746-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210814091634-6746-m02' is duplicated with machine name 'multinode-20210814091634-6746-m02' in profile 'multinode-20210814091634-6746'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210814091634-6746-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210814091634-6746-m03 --driver=docker  --container-runtime=containerd: (42.709808238s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210814091634-6746
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210814091634-6746: exit status 80 (259.744323ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210814091634-6746
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210814091634-6746-m03 already exists in multinode-20210814091634-6746-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210814091634-6746-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210814091634-6746-m03: (2.743628031s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.86s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.85s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.849288551s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.85s)
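
Each of these package tests follows the same pattern: start a throwaway container for the target distro image, install libvirt0 (the kvm2 driver's runtime dependency), and verify the freshly built .deb installs cleanly with dpkg. A hedged Go sketch of that pattern; the image tag and .deb path are copied from the log, while the helper name is ours:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// installDebIn runs a one-shot container and attempts to install
	// the given .deb, mirroring the docker invocation in the log.
	func installDebIn(image, deb string) error {
		script := fmt.Sprintf(
			"apt-get update; apt-get install -y libvirt0; dpkg -i %s", deb)
		out, err := exec.Command("docker", "run", "--rm",
			"-v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp",
			image, "sh", "-c", script).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v\n%s", image, err, out)
		}
		return nil
	}

	func main() {
		if err := installDebIn("debian:sid",
			"/var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"); err != nil {
			fmt.Println(err)
		}
	}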

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.21s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.206012635s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.21s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (11.11s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.105428495s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (11.11s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.27s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0814 09:27:38.025391    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.271885615s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.27s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.54s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0814 09:27:50.188772    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.539083772s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.54s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.72s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (14.71809555s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.72s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.16s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.163497788s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.16s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.77s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.76934233s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.77s)

                                                
                                    
TestPreload (133.56s)
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210814092837-6746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0814 09:29:01.071631    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210814092837-6746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m27.397293768s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210814092837-6746 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210814092837-6746 -- sudo crictl pull busybox: (1.504675584s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210814092837-6746 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210814092837-6746 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (41.486790855s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210814092837-6746 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20210814092837-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210814092837-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210814092837-6746: (2.891772271s)
--- PASS: TestPreload (133.56s)
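The pass above hinges on one assertion: an image pulled while the cluster ran v1.17.0 must still be present after the restart at v1.17.3. A rough sketch of that check, assuming the profile name and binary path from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageSurvived lists images inside the node via crictl and looks for the
// busybox image that was pulled before the Kubernetes version bump.
func imageSurvived(profile string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"--", "sudo", "crictl", "image", "ls").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "busybox"), nil
}

func main() {
	ok, err := imageSurvived("test-preload-20210814092837-6746")
	fmt.Println(ok, err)
}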

                                                
                                    
TestInsufficientStorage (12.95s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210814093219-6746 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210814093219-6746 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.101151098s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210814093219-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"bbc6d713-c9cd-4712-a032-34316c88af7f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig"},"datacontenttype":"application/json","id":"9b838b3c-fb81-43b4-bb91-f4c8fbd9838d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"4b3b68d3-4228-468e-b75a-1a74a92ca34a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube"},"datacontenttype":"application/json","id":"9ba51fcf-f72d-40b9-a169-04f10026ac33","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=master"},"datacontenttype":"application/json","id":"b37bd3a7-d52d-4aae-8439-d2bc5c693ba6","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"4fcb477c-5669-485a-bdc1-0b6670907301","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"f23ce2fd-10ed-4c06-a3a7-450a26999283","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"fe1dfe7b-918e-4da5-b39a-49d16daedcfa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"c0f30f7e-57ff-41be-9045-913066d66994","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210814093219-6746 in cluster insufficient-storage-20210814093219-6746","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"2cdb4a0e-7e38-4a68-b77c-bdd2213006a7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"d06f85fe-3fbe-42be-acd8-4785db8f700c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"ea911a30-2f15-4021-9c7c-77cac49305c0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"a8d2e4f4-2276-4578-bc73-97942d4033b7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210814093219-6746 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210814093219-6746 --output=json --layout=cluster: exit status 7 (264.14642ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210814093219-6746","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210814093219-6746","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:32:25.921209  127490 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210814093219-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210814093219-6746 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210814093219-6746 --output=json --layout=cluster: exit status 7 (267.492921ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210814093219-6746","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210814093219-6746","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 09:32:26.189124  127548 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210814093219-6746" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	E0814 09:32:26.199571  127548 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/insufficient-storage-20210814093219-6746/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210814093219-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210814093219-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210814093219-6746: (6.312323304s)
--- PASS: TestInsufficientStorage (12.95s)
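Each tab-indented stdout line above is a CloudEvent, which is what --output=json emits; a consumer only needs line-by-line JSON decoding. A minimal sketch, assuming the stream is piped to stdin:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event captures the two CloudEvent fields the test cares about; all data
// values in minikube's stream are strings, as the log above shows.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["name"], ev.Data["message"])
		}
	}
}

Run against the stream above, this would surface RSRC_DOCKER_STORAGE, the condition the test then confirms through the status exit codes.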

                                                
                                    
TestKubernetesUpgrade (193.33s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.920099201s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210814093232-6746
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210814093232-6746: (21.259579221s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210814093232-6746 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210814093232-6746 status --format={{.Host}}: exit status 7 (88.576994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.158211084s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210814093232-6746 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd: exit status 106 (150.878567ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210814093232-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210814093232-6746
	    minikube start -p kubernetes-upgrade-20210814093232-6746 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210814093232-67462 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210814093232-6746 --kubernetes-version=v1.22.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210814093232-6746 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.593660108s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210814093232-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210814093232-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210814093232-6746: (3.088832159s)
--- PASS: TestKubernetesUpgrade (193.33s)
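The downgrade step passes because the failure is deliberate: restarting a v1.22.0-rc.0 cluster with an older --kubernetes-version must exit with a distinct code (106, K8S_DOWNGRADE_UNSUPPORTED in the output above). A sketch of that assertion, reusing the command line from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-20210814093232-6746",
		"--memory=2200", "--kubernetes-version=v1.14.0",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	// Exit code 106 is what the log shows for K8S_DOWNGRADE_UNSUPPORTED.
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 106 {
		fmt.Println("downgrade correctly rejected")
	} else {
		fmt.Println("unexpected result:", err)
	}
}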

                                                
                                    
TestMissingContainerUpgrade (142.85s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.593711568.exe start -p missing-upgrade-20210814093411-6746 --memory=2200 --driver=docker  --container-runtime=containerd
E0814 09:34:13.235779    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: /tmp/minikube-v1.9.1.593711568.exe start -p missing-upgrade-20210814093411-6746 --memory=2200 --driver=docker  --container-runtime=containerd: (1m19.644789183s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210814093411-6746
version_upgrade_test.go:320: (dbg) Done: docker stop missing-upgrade-20210814093411-6746: (10.470534517s)
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210814093411-6746
version_upgrade_test.go:331: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210814093411-6746 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210814093411-6746 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.902747215s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210814093411-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210814093411-6746
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210814093411-6746: (3.120074672s)
--- PASS: TestMissingContainerUpgrade (142.85s)
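The fault this test injects is simple enough to show in a sketch: remove the cluster's node container behind minikube's back and require the current binary to recreate it from the profile. Names and flags mirror the log; the error handling is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	profile := "missing-upgrade-20210814093411-6746"
	_ = run("docker", "stop", profile) // take the node container away...
	_ = run("docker", "rm", profile)
	// ...then the current binary must repair the profile on start.
	fmt.Println(run("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=2200", "--driver=docker", "--container-runtime=containerd"))
}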

                                                
                                    
TestPause/serial/Start (70.37s)
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210814093545-6746 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210814093545-6746 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m10.366525631s)
--- PASS: TestPause/serial/Start (70.37s)

                                                
                                    
TestNetworkPlugins/group/false (0.74s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210814093635-6746 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210814093635-6746 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (252.390183ms)

                                                
                                                
-- stdout --
	* [false-20210814093635-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	  - MINIKUBE_LOCATION=master
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 09:36:35.579669  160108 out.go:298] Setting OutFile to fd 1 ...
	I0814 09:36:35.579750  160108 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:36:35.579758  160108 out.go:311] Setting ErrFile to fd 2...
	I0814 09:36:35.579761  160108 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0814 09:36:35.579878  160108 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/bin
	I0814 09:36:35.580871  160108 out.go:305] Setting JSON to false
	I0814 09:36:35.617568  160108 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":4758,"bootTime":1628929038,"procs":252,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0814 09:36:35.617674  160108 start.go:121] virtualization: kvm guest
	I0814 09:36:35.619975  160108 out.go:177] * [false-20210814093635-6746] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0814 09:36:35.621423  160108 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/kubeconfig
	I0814 09:36:35.620128  160108 notify.go:169] Checking for updates...
	I0814 09:36:35.622976  160108 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 09:36:35.624240  160108 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube
	I0814 09:36:35.625557  160108 out.go:177]   - MINIKUBE_LOCATION=master
	I0814 09:36:35.626072  160108 config.go:177] Loaded profile config "pause-20210814093545-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0814 09:36:35.626191  160108 config.go:177] Loaded profile config "running-upgrade-20210814093236-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:36:35.626285  160108 config.go:177] Loaded profile config "stopped-upgrade-20210814093232-6746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0814 09:36:35.626330  160108 driver.go:335] Setting default libvirt URI to qemu:///system
	I0814 09:36:35.681252  160108 docker.go:132] docker version: linux-19.03.15
	I0814 09:36:35.681347  160108 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0814 09:36:35.772209  160108 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:153 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-14 09:36:35.719396074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0814 09:36:35.772311  160108 docker.go:244] overlay module found
	I0814 09:36:35.774318  160108 out.go:177] * Using the docker driver based on user configuration
	I0814 09:36:35.774341  160108 start.go:278] selected driver: docker
	I0814 09:36:35.774346  160108 start.go:751] validating driver "docker" against <nil>
	I0814 09:36:35.774365  160108 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0814 09:36:35.774421  160108 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0814 09:36:35.774453  160108 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0814 09:36:35.775811  160108 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0814 09:36:35.777729  160108 out.go:177] 
	W0814 09:36:35.777829  160108 out.go:242] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0814 09:36:35.779250  160108 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "false-20210814093635-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210814093635-6746
--- PASS: TestNetworkPlugins/group/false (0.74s)
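The quick exit (status 14, MK_USAGE) comes from start-time validation: containerd has no built-in pod networking, so --cni=false is rejected before any container is created. A hand-wavy sketch of that rule (the real check lives in minikube's start command and is more involved):

package main

import "fmt"

// validateCNI mimics the usage check seen above: non-Docker runtimes need a
// CNI, so disabling it is a configuration error.
func validateCNI(runtime, cni string) error {
	if runtime != "docker" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("containerd", "false")) // rejected, as in the log
	fmt.Println(validateCNI("docker", "false"))     // allowed
}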

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (22.37s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210814093545-6746 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210814093545-6746 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.355354416s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (22.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (124.6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210814093902-6746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210814093902-6746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m4.596717186s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.60s)

                                                
                                    
TestPause/serial/Unpause (0.75s)
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210814093545-6746 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
TestPause/serial/DeletePaused (3.22s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210814093545-6746 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210814093545-6746 --alsologtostderr -v=5: (3.219256633s)
--- PASS: TestPause/serial/DeletePaused (3.22s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.75s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210814093545-6746
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210814093545-6746: exit status 1 (36.905542ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210814093545-6746

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.75s)
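The expected failure above is the whole point of the check: after a delete, inspecting the profile's Docker volume should fail with "No such volume". A sketch of that verification:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeDeleted returns true when docker can no longer find the profile's
// volume, i.e. when `docker volume inspect` fails the way the log shows.
func volumeDeleted(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	return err != nil && strings.Contains(string(out), "No such volume")
}

func main() {
	fmt.Println(volumeDeleted("pause-20210814093545-6746"))
}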

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c017c9e9-fce3-11eb-977c-0242f298e734] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:343: "busybox" [c017c9e9-fce3-11eb-977c-0242f298e734] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.011057486s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)
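The "waiting 8m0s for pods matching ..." lines come from a polling helper; a rough equivalent using kubectl follows (the jsonpath query and poll interval are illustrative, not the helper's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls until some pod carrying the label reports phase Running,
// or the deadline passes.
func waitRunning(ctx, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", label)
}

func main() {
	fmt.Println(waitRunning("old-k8s-version-20210814093902-6746",
		"integration-test=busybox", 8*time.Minute))
}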

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (92.27s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210814094108-6746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210814094108-6746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m32.274584516s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (92.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.7s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210814093902-6746 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (20.78s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210814093902-6746 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210814093902-6746 --alsologtostderr -v=3: (20.778886014s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746: exit status 7 (115.068015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210814093902-6746 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
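"status error: exit status 7 (may be ok)" reflects that minikube encodes host state in the status exit code, and after a stop the test tolerates exactly that value. A sketch of the tolerant check, assuming the binary path and profile from the log:

package main

import (
	"fmt"
	"os/exec"
)

// hostStopped treats exit code 7 from the status command as the expected
// stopped state, matching the "(may be ok)" handling above.
func hostStopped(profile string) bool {
	err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile).Run()
	ee, ok := err.(*exec.ExitError)
	return ok && ee.ExitCode() == 7
}

func main() {
	fmt.Println(hostStopped("old-k8s-version-20210814093902-6746"))
}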

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (87.68s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210814093902-6746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210814093902-6746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (1m27.370566535s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210814093902-6746 -n old-k8s-version-20210814093902-6746
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (87.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210814094108-6746 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [0bd279fd-d2fb-4e33-980f-ea07917f3bcc] Pending
helpers_test.go:343: "busybox" [0bd279fd-d2fb-4e33-980f-ea07917f3bcc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [0bd279fd-d2fb-4e33-980f-ea07917f3bcc] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.011822074s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210814094108-6746 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210814094108-6746 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210814094108-6746 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (20.64s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210814094108-6746 --alsologtostderr -v=3
E0814 09:42:50.189049    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210814094108-6746 --alsologtostderr -v=3: (20.636026067s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-q7m9p" [ee4fc5ca-fce3-11eb-8319-0242c0a83a02] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010764208s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210814094108-6746 -n no-preload-20210814094108-6746
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210814094108-6746 -n no-preload-20210814094108-6746: exit status 7 (100.413176ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210814094108-6746 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (321.96s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210814094108-6746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210814094108-6746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (5m21.637191946s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210814094108-6746 -n no-preload-20210814094108-6746
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (321.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-q7m9p" [ee4fc5ca-fce3-11eb-8319-0242c0a83a02] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005631391s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210814093902-6746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210814093902-6746 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
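The two "Found non-minikube image" lines come from parsing `crictl images -o json` inside the node. A sketch of that audit (the filter here is simplified and illustrative; the test compares against its own expected-image list):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the shape of `crictl images -o json` output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "old-k8s-version-20210814093902-6746",
		"sudo crictl images -o json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println(err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Crude stand-in for the real allow-list check.
			if !strings.Contains(tag, "k8s.gcr.io") && !strings.Contains(tag, "kube") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}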

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (75.3s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210814094325-6746 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210814094325-6746 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (1m15.301645026s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [f7b65b9d-1923-4b4e-b278-8ef5cecdd2d7] Pending
helpers_test.go:343: "busybox" [f7b65b9d-1923-4b4e-b278-8ef5cecdd2d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [f7b65b9d-1923-4b4e-b278-8ef5cecdd2d7] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.014355277s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210814094325-6746 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (20.56s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210814094325-6746 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210814094325-6746 --alsologtostderr -v=3: (20.556239802s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746: exit status 7 (87.727652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210814094325-6746 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (343.86s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210814094325-6746 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0814 09:45:41.071777    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:46:07.559113    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:07.564381    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:07.574600    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:07.594826    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:07.635080    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:07.716043    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:07.877030    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:08.197375    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:08.838016    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:10.118546    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:12.679068    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:17.799653    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:28.040522    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:46:48.521368    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:47:29.482279    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
E0814 09:47:38.024849    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:47:50.189064    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210814094325-6746 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (5m43.543044728s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.86s)
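
SecondStart re-issues the original start command against the stopped profile and then asserts the host came back. The same two steps outside the harness, using the flags from this run:

	out/minikube-linux-amd64 start -p embed-certs-20210814094325-6746 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.21.3
	# {{.Host}} should now print "Running" instead of exiting with status 7.
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210814094325-6746 -n embed-certs-20210814094325-6746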

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-g5rms" [b10653e3-cf89-4ad2-bfe0-02bd5e3ab136] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-g5rms" [b10653e3-cf89-4ad2-bfe0-02bd5e3ab136] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.011725973s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-g5rms" [b10653e3-cf89-4ad2-bfe0-02bd5e3ab136] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005933447s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210814094108-6746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210814094108-6746 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
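
VerifyKubernetesImages dumps the images known to containerd inside the node and flags anything that is not a stock minikube image. A sketch of the same inspection, assuming jq is installed on the host (it is not part of the harness):

	# crictl prints {"images":[{"repoTags":[...],...},...]}; list every tag.
	out/minikube-linux-amd64 ssh -p no-preload-20210814094108-6746 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'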

TestStartStop/group/default-k8s-different-port/serial/FirstStart (56.51s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210814095040-6746 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0814 09:50:53.236292    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210814095040-6746 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (56.513056313s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (56.51s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-s5twx" [a8bd4234-6263-4b5b-a621-d2337301a035] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011109887s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210814094325-6746 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c5ccb667-8e47-48f8-86b8-4907a3eb4cd5] Pending
helpers_test.go:343: "busybox" [c5ccb667-8e47-48f8-86b8-4907a3eb4cd5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [c5ccb667-8e47-48f8-86b8-4907a3eb4cd5] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.011411659s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.50s)
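
DeployApp creates a busybox pod from the repository's testdata, waits for it to become Ready, and then checks that a shell can run inside it. A rough kubectl-only equivalent of the polling the harness does:

	kubectl --context default-k8s-different-port-20210814095040-6746 create -f testdata/busybox.yaml
	# The harness polls for up to 8m; "kubectl wait" gives a similar effect
	# because the pod object exists as soon as create returns.
	kubectl --context default-k8s-different-port-20210814095040-6746 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context default-k8s-different-port-20210814095040-6746 exec busybox -- /bin/sh -c "ulimit -n"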

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210814095040-6746 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.8s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210814095040-6746 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210814095040-6746 --alsologtostderr -v=3: (20.803498783s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.80s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746: exit status 7 (87.968804ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210814095040-6746 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (335.12s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210814095040-6746 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0814 09:52:38.024966    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:52:40.610316    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:40.615609    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:40.625831    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:40.646058    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:40.686424    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:40.767589    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:40.928667    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:41.249308    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:41.890102    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:43.170871    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:45.731310    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:52:50.188878    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
E0814 09:52:50.851861    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:53:01.092579    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210814095040-6746 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (5m34.779799226s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210814095040-6746 -n default-k8s-different-port-20210814095040-6746
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (335.12s)

TestStartStop/group/newest-cni/serial/FirstStart (59.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210814095308-6746 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
E0814 09:53:21.573730    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
E0814 09:54:02.534671    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210814095308-6746 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (59.802096612s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.80s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.56s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210814095308-6746 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.56s)

TestStartStop/group/newest-cni/serial/Stop (20.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210814095308-6746 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210814095308-6746 --alsologtostderr -v=3: (20.663774677s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.66s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210814095308-6746 -n newest-cni-20210814095308-6746
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210814095308-6746 -n newest-cni-20210814095308-6746: exit status 7 (110.579122ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210814095308-6746 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (34.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210814095308-6746 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210814095308-6746 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (34.384967104s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210814095308-6746 -n newest-cni-20210814095308-6746
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.70s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210814095308-6746 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestNetworkPlugins/group/auto/Start (70.46s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210814093634-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd
E0814 09:56:07.559226    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210814093634-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m10.45678012s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.46s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210814093634-6746 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210814093634-6746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-jzzb7" [f9acd260-bed4-4093-8a6d-d75aa9c8d287] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-jzzb7" [f9acd260-bed4-4093-8a6d-d75aa9c8d287] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005602708s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)
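
NetCatPod force-replaces the netcat deployment and waits for its pod to report Ready. A by-hand sketch; "kubectl rollout status" stands in for the harness's label-based polling here, assuming the deployment keeps its "netcat" name from the manifest:

	kubectl --context auto-20210814093634-6746 replace --force -f testdata/netcat-deployment.yaml
	# Waits until the deployment's pods are available (the harness allows 15m).
	kubectl --context auto-20210814093634-6746 rollout status deployment/netcat --timeout=15m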

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210814093634-6746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210814093634-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210814093634-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
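
The DNS, Localhost, and HairPin tests above are three connectivity probes executed inside the netcat pod: cluster DNS resolution, a loopback TCP dial, and a hairpin dial back through the pod's own service. The probes, exactly as this run issued them:

	kubectl --context auto-20210814093634-6746 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-20210814093634-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: dialing the "netcat" service routes back into the same pod.
	kubectl --context auto-20210814093634-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"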

TestNetworkPlugins/group/custom-weave/Start (73.56s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210814093636-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd
E0814 09:57:38.025227    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/addons-20210814090521-6746/client.crt: no such file or directory
E0814 09:57:40.610633    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210814093636-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m13.559344332s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (73.56s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-hjr7d" [2936fbc6-dc7a-429f-b4ae-fa739e5e2c42] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013160156s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-hjr7d" [2936fbc6-dc7a-429f-b4ae-fa739e5e2c42] Running
E0814 09:57:50.188510    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/functional-20210814091034-6746/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006230264s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210814095040-6746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210814095040-6746 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)

TestNetworkPlugins/group/cilium/Start (107.17s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210814093636-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E0814 09:58:08.296393    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/no-preload-20210814094108-6746/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210814093636-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m47.16987147s)
--- PASS: TestNetworkPlugins/group/cilium/Start (107.17s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210814093636-6746 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-weave/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210814093636-6746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-g2zwm" [be5febb4-5ad1-4f71-80cd-f605529b0dbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-g2zwm" [be5febb4-5ad1-4f71-80cd-f605529b0dbb] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 9.005472598s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (9.25s)

TestNetworkPlugins/group/calico/Start (83.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210814093636-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210814093636-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (1m23.926537161s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.93s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-p4btv" [6157c566-9aff-470b-b5f4-77afaed8615a] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.012100924s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-9qd4q" [f9772379-cd9b-4a0c-bdfc-67c2e217d4a0] Running

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 6.006177442s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210814093636-6746 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210814093636-6746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-79nmj" [2daa1706-6b15-46d6-8990-be98877de38b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-79nmj" [2daa1706-6b15-46d6-8990-be98877de38b] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005287149s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210814093636-6746 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)

TestNetworkPlugins/group/cilium/NetCatPod (9.41s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210814093636-6746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-qvsvc" [5777d1fa-b9fc-4f2d-824b-665540cbef68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-qvsvc" [5777d1fa-b9fc-4f2d-824b-665540cbef68] Running

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.006145402s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.41s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210814093636-6746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210814093636-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210814093636-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210814093636-6746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210814093636-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.15s)

TestNetworkPlugins/group/cilium/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210814093636-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (244.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210814093635-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210814093635-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (4m4.849723623s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (244.85s)

TestNetworkPlugins/group/kindnet/Start (71.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210814093635-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
E0814 10:01:07.558604    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/old-k8s-version-20210814093902-6746/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210814093635-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m11.65184459s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.65s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-r67tz" [0a7a5b81-27a7-4f93-be20-10ee79799b58] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.011367942s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)
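
ControllerPod waits for the CNI's own daemon pod to be healthy before the network is exercised. An approximate one-liner, assuming the kindnet pod already exists by the time the cluster reports ready (kubectl wait errors out if no pod matches the label yet, so the harness's polling is more forgiving):

	kubectl --context kindnet-20210814093635-6746 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m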

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210814093635-6746 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210814093635-6746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-t6pwf" [b6bb06fb-c55a-46d3-bc9b-7bfb433463d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-t6pwf" [b6bb06fb-c55a-46d3-bc9b-7bfb433463d2] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.0054197s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.24s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210814093635-6746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210814093635-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210814093635-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (95.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210814093635-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0814 10:01:37.536193    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt: no such file or directory
[the same cert_rotation.go:168 "no such file or directory" error repeated 36 more times between 10:01:38 and 10:03:12, against the client.crt of the default-k8s-different-port-20210814095040-6746, auto-20210814093634-6746, addons-20210814090521-6746, old-k8s-version-20210814093902-6746, no-preload-20210814094108-6746, functional-20210814091034-6746, and custom-weave-20210814093636-6746 profiles]
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210814093635-6746 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (1m35.246147707s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.25s)
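
The cert_rotation.go:168 errors above come from client-go's certificate-rotation watcher in the long-running test process (PID 6746): it keeps re-reading client certificates for profiles whose files are already gone, most likely because those profiles were cleaned up by earlier tests. The failure mode is just a failed open(2), as this small reproduction shows (the paths are hypothetical stand-ins, not from this run):

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		// Hypothetical profile paths; the profile has already been
		// deleted, so the files no longer exist.
		crt := "/home/jenkins/.minikube/profiles/deleted-profile/client.crt"
		key := "/home/jenkins/.minikube/profiles/deleted-profile/client.key"
		if _, err := tls.LoadX509KeyPair(crt, key); err != nil {
			fmt.Println(err) // open ...: no such file or directory
		}
	}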

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210814093635-6746 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
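
KubeletFlags greps the running kubelet's command line over minikube ssh; with containerd one would expect it to assert remote-runtime flags such as --container-runtime=remote (an assumption about the test's check, which this log does not show). A sketch of the same inspection:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "bridge-20210814093635-6746" // profile from this run
		out, err := exec.Command("out/minikube-linux-amd64", "ssh",
			"-p", profile, "pgrep -a kubelet").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		// Assumed flag for a containerd-backed node; adjust as needed.
		fmt.Println("remote runtime flag present:",
			strings.Contains(string(out), "--container-runtime=remote"))
	}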

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (7.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210814093635-6746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-jtbb2" [027666f3-278b-4953-9a44-b5d7c4efe2c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0814 10:03:13.534928    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-jtbb2" [027666f3-278b-4953-9a44-b5d7c4efe2c3] Running
E0814 10:03:16.095460    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/custom-weave-20210814093636-6746/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 7.005477637s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (7.24s)
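
NetCatPod replaces the netcat deployment and then waits (up to 15m) for pods labeled app=netcat to become ready, presumably by polling the API. The same readiness gate can be expressed as a one-shot kubectl wait, shown here as a rough equivalent rather than the test's own mechanism:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "bridge-20210814093635-6746" // context from this run
		cmd := exec.Command("kubectl", "--context", ctx,
			"wait", "--for=condition=Ready", "pod",
			"-l", "app=netcat", "--timeout=15m")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("wait failed: %v\n%s", err, out)
			return
		}
		fmt.Println("app=netcat pods are Ready")
	}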

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210814093635-6746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210814093635-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210814093635-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210814093635-6746 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210814093635-6746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-n8dft" [e6d7a69b-e17c-417a-a243-1a1e3582edf7] Pending
helpers_test.go:343: "netcat-66fbc655d5-n8dft" [e6d7a69b-e17c-417a-a243-1a1e3582edf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-n8dft" [e6d7a69b-e17c-417a-a243-1a1e3582edf7] Running
E0814 10:04:20.742250    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/default-k8s-different-port-20210814095040-6746/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005782345s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210814093635-6746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210814093635-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210814093635-6746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
E0814 10:04:27.932210    6746 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-3823-c3c4d0455dfed89650fdf54f9f70d551912b4969/.minikube/profiles/auto-20210814093634-6746/client.crt: no such file or directory
[the same cert_rotation.go:168 "no such file or directory" error repeated 45 more times between 10:04:32 and 10:06:30, against the client.crt of the custom-weave-20210814093636-6746, calico-20210814093636-6746, cilium-20210814093636-6746, old-k8s-version-20210814093902-6746, and kindnet-20210814093635-6746 profiles]

                                                
                                    

Test skip (24/264)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.54s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210814095039-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210814095039-6746
--- SKIP: TestStartStop/group/disable-driver-mounts (0.54s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.48s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210814093634-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210814093634-6746
--- SKIP: TestNetworkPlugins/group/kubenet (0.48s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.48s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210814093635-6746" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210814093635-6746
--- SKIP: TestNetworkPlugins/group/flannel (0.48s)

                                                
                                    